openmpi/opal/mca/hwloc/base/hwloc_base_maffinity.c

/*
* Copyright (c) 2011-2012 Cisco Systems, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*/
#include "opal_config.h"
#include "opal/constants.h"
#include "opal/mca/hwloc/hwloc.h"
#include "opal/mca/hwloc/base/base.h"
/*
* Don't use show_help() here (or print any error message at all).
* Let the upper layer output a relevant message, because doing so may
* be complicated (e.g., this might be called from the ORTE ODLS,
* which has to do some extra steps to get error messages to be
* displayed).
*/
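
/*
 * Set this process's default memory allocation (membind) policy, as
 * selected by the corresponding MCA parameter: "local_only" strictly
 * binds all future allocations to the local NUMA node(s) where the
 * process is bound, while "none" (the default) leaves the operating
 * system's policy untouched.
 */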
int opal_hwloc_base_set_process_membind_policy(void)
{
int rc = 0, flags;
hwloc_membind_policy_t policy;
hwloc_cpuset_t cpuset;
/* Make sure opal_hwloc_topology has been set by the time we've
been called */
if (NULL == opal_hwloc_topology) {
return OPAL_ERR_BAD_PARAM;
}
/* Set the default memory allocation policy according to MCA
param */
switch (opal_hwloc_base_map) {
case OPAL_HWLOC_BASE_MAP_LOCAL_ONLY:
policy = HWLOC_MEMBIND_BIND;
flags = HWLOC_MEMBIND_STRICT;
break;
case OPAL_HWLOC_BASE_MAP_NONE:
default:
policy = HWLOC_MEMBIND_DEFAULT;
flags = 0;
break;
}
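/* Bind our memory wherever the process itself is currently bound:
   query the current CPU binding and use that location as the
   membind target below. */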
cpuset = hwloc_bitmap_alloc();
if (NULL == cpuset) {
rc = OPAL_ERR_OUT_OF_RESOURCE;
} else {
int e;
hwloc_get_cpubind(opal_hwloc_topology, cpuset, 0);
rc = hwloc_set_membind(opal_hwloc_topology,
cpuset, policy, flags);
e = errno;
hwloc_bitmap_free(cpuset);
/* See if hwloc was able to do it. If hwloc failed due to
ENOSYS, but the base_map == NONE, then it's not really an
error. */
if (0 != rc && ENOSYS == e &&
OPAL_HWLOC_BASE_MAP_NONE == opal_hwloc_base_map) {
rc = 0;
}
}
return (0 == rc) ? OPAL_SUCCESS : OPAL_ERROR;
}
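
/* A minimal usage sketch (hypothetical caller, not part of this file):
   the upper layer applies the policy after the process has been bound,
   and -- per the comment above -- is responsible for any user-visible
   error message:

       if (OPAL_SUCCESS != opal_hwloc_base_set_process_membind_policy()) {
           ... emit show_help() or similar at the ORTE/OMPI layer ...
       }
*/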
int opal_hwloc_base_memory_set(opal_hwloc_base_memory_segment_t *segments,
size_t num_segments)
{
int rc = OPAL_SUCCESS;
char *msg = NULL;
size_t i;
hwloc_cpuset_t cpuset = NULL;
/* bozo check */
if (NULL == opal_hwloc_topology) {
msg = "hwloc_set_area_membind() failure - topology not available";
return opal_hwloc_base_report_bind_failure(__FILE__, __LINE__,
msg, rc);
}
/* This module won't be used unless the process is already
processor-bound. So find out where we're processor bound, and
bind our memory there, too. */
cpuset = hwloc_bitmap_alloc();
if (NULL == cpuset) {
rc = OPAL_ERR_OUT_OF_RESOURCE;
msg = "hwloc_bitmap_alloc() failure";
goto out;
}
hwloc_get_cpubind(opal_hwloc_topology, cpuset, 0);
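/* Strictly bind each segment's pages to the NUMA node(s) covering our
   current CPU binding; the first failure aborts the loop and is
   reported below. */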
for (i = 0; i < num_segments; ++i) {
if (0 != hwloc_set_area_membind(opal_hwloc_topology,
segments[i].mbs_start_addr,
segments[i].mbs_len, cpuset,
HWLOC_MEMBIND_BIND,
HWLOC_MEMBIND_STRICT)) {
rc = OPAL_ERROR;
msg = "hwloc_set_area_membind() failure";
goto out;
}
}
out:
if (NULL != cpuset) {
hwloc_bitmap_free(cpuset);
}
if (OPAL_SUCCESS != rc) {
return opal_hwloc_base_report_bind_failure(__FILE__, __LINE__, msg, rc);
}
return OPAL_SUCCESS;
}
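
/* A minimal usage sketch (hypothetical caller): bind the pages of a
   freshly mapped shared-memory region to wherever this process is
   CPU-bound:

       opal_hwloc_base_memory_segment_t seg;
       seg.mbs_start_addr = base;    // start of the mapped region
       seg.mbs_len = length;         // its size in bytes
       (void) opal_hwloc_base_memory_set(&seg, 1);
*/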
int opal_hwloc_base_node_name_to_id(char *node_name, int *id)
{
/* GLB: fix me */
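/* Assumption (hence the "fix me" above): node names follow the old
   carto-style "mem<N>" convention, so skip the 3-character prefix and
   parse the trailing integer; no validation is performed. */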
*id = atoi(node_name + 3);
return OPAL_SUCCESS;
}
int opal_hwloc_base_membind(opal_hwloc_base_memory_segment_t *segs,
size_t count, int node_id)
{
size_t i;
int rc = OPAL_SUCCESS;
char *msg = NULL;
hwloc_cpuset_t cpuset = NULL;
/* bozo check */
if (NULL == opal_hwloc_topology) {
msg = "hwloc_set_area_membind() failure - topology not available";
return opal_hwloc_base_report_bind_failure(__FILE__, __LINE__,
msg, rc);
}
cpuset = hwloc_bitmap_alloc();
if (NULL == cpuset) {
rc = OPAL_ERR_OUT_OF_RESOURCE;
msg = "hwloc_bitmap_alloc() failure";
goto out;
}
hwloc_bitmap_set(cpuset, node_id);
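/* Note: hwloc_set_area_membind() interprets this bitmap as a cpuset,
   so node_id is effectively used as a PU index here; memory ends up
   bound to the NUMA node(s) containing that PU. */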
for (i = 0; i < count; i++) {
if (0 != hwloc_set_area_membind(opal_hwloc_topology,
segs[i].mbs_start_addr,
segs[i].mbs_len, cpuset,
HWLOC_MEMBIND_BIND,
HWLOC_MEMBIND_STRICT)) {
rc = OPAL_ERROR;
msg = "hwloc_set_area_membind() failure";
goto out;
}
}
out:
if (NULL != cpuset) {
hwloc_bitmap_free(cpuset);
}
if (OPAL_SUCCESS != rc) {
return opal_hwloc_base_report_bind_failure(__FILE__, __LINE__, msg, rc);
}
return OPAL_SUCCESS;
}