0c0fe022ff
In this implementation, we begin mapping on the first node that has at least one slot available, as measured by slots_inuse versus the soft limit. If no node meets that criterion, we simply start at the beginning of the node list, since we are oversubscribed anyway. Note that this logic is skipped if the user specifies a mapping; in that case it is simply "user beware". A sketch of this heuristic follows below.

The real root cause of the problem is that we don't adjust sched_yield as we add processes onto a node. Hence, the node becomes oversubscribed and performance goes into the toilet. What we REALLY need to do to solve the problem is:

(a) modify the PLS components so they reuse the existing daemons;
(b) create a way to tell a running process to adjust its sched_yield; and
(c) modify the ODLS components to update the sched_yield on a process via the new method.

Until we do that, we will continue to have this problem. All this fix (and any subsequent one that focuses solely on the mapper) does is hopefully make it happen less often.

This commit was SVN r12145.
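A minimal sketch of the start-node heuristic described above. The struct layout and the names node_t and starting_node are hypothetical illustrations, not the actual ORTE data structures or API; only the slots_inuse-versus-soft-limit comparison and the fall-back-to-head behavior come from the commit message.

```c
#include <stddef.h>

/* Hypothetical node record; field names mirror the commit message,
 * not the real ORTE structures. */
typedef struct {
    const char *name;
    int slots_inuse;  /* processes already mapped onto this node */
    int slots_soft;   /* soft slot limit for this node */
} node_t;

/* Return the index of the first node with at least one slot available
 * (slots_inuse below the soft limit). If every node is at or over its
 * soft limit, we are oversubscribed anyway, so start at the head of
 * the node list. */
static size_t starting_node(const node_t *nodes, size_t num_nodes)
{
    for (size_t i = 0; i < num_nodes; i++) {
        if (nodes[i].slots_inuse < nodes[i].slots_soft) {
            return i;
        }
    }
    return 0;  /* oversubscribed: fall back to the beginning */
}
```

As the message notes, this only changes where mapping starts; it does nothing about the underlying sched_yield behavior once a node is oversubscribed.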
base
bjs
dash_host
gridengine
hostfile
loadleveler
localhost
lsf_bproc
proxy
slurm
tm
xgrid
Makefile.am
ras_types.h
ras.h