author | Srikar Dronamraju <srikar@linux.vnet.ibm.com> | 2015-06-16 17:25:59 +0530
committer | Ingo Molnar <mingo@kernel.org> | 2015-07-07 08:46:10 +0200
commit | 2a1ed24ce94036d00a7c5d5e99a77a80f0aa556a (patch)
tree | afbfcf9e9bf9138a1dbab6d300a70e2f8036d582 /kernel/sched/features.h
parent | 6dfec8d9493f48a42896386b41ec1a4644331b0b (diff)
sched/numa: Prefer NUMA hotness over cache hotness
The current load balancer may not try to prevent a task from moving
out of a preferred node to a less preferred node. The reasons for
this are:
- Since the sched features NUMA and NUMA_RESIST_LOWER are disabled
by default, migrate_degrades_locality() always returns false.
- Even if NUMA_RESIST_LOWER were enabled, migrate_degrades_locality()
never gets called if the task is cache hot.
The above behaviour means that tasks can move out of their preferred
node, only to be brought back to it later by the NUMA balancer (due
to higher NUMA faults).
To avoid the above, this commit merges migrate_degrades_locality()
and migrate_improves_locality(). It also replaces the three sched
features NUMA, NUMA_FAVOUR_HIGHER and NUMA_RESIST_LOWER with a single
sched feature, NUMA.
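
For illustration only, here is a minimal, self-contained sketch of what such a merged check can look like: it folds "improves" and "degrades" into one signed result, negative when the destination node has recorded more NUMA hinting faults for the task, positive when it has recorded fewer, and zero when the move is neutral or balancing is off. This is not the kernel's actual code; the struct and function names below are hypothetical.

```c
#include <stdio.h>

#define MAX_NODES 4

/* Hypothetical per-task record of NUMA hinting faults, one counter per node. */
struct task_numa_stats {
	unsigned long faults[MAX_NODES];
	int numa_balancing_enabled;
};

/*
 * Merged locality check (sketch): < 0 means moving from src_nid to
 * dst_nid improves NUMA locality, > 0 means it degrades locality,
 * 0 means the move is neutral or NUMA balancing is disabled.
 */
static int migrate_degrades_locality_sketch(const struct task_numa_stats *t,
					    int src_nid, int dst_nid)
{
	unsigned long src_faults, dst_faults;

	if (!t->numa_balancing_enabled || src_nid == dst_nid)
		return 0;

	src_faults = t->faults[src_nid];
	dst_faults = t->faults[dst_nid];

	if (dst_faults > src_faults)
		return -1;	/* destination saw more faults: the move improves locality */
	if (dst_faults < src_faults)
		return 1;	/* destination saw fewer faults: resist the move */
	return 0;
}

int main(void)
{
	struct task_numa_stats t = {
		.faults = { 120, 30, 0, 0 },	/* node 0 is the preferred node */
		.numa_balancing_enabled = 1,
	};

	printf("node 0 -> node 1: %d\n", migrate_degrades_locality_sketch(&t, 0, 1));	/* 1: degrades */
	printf("node 1 -> node 0: %d\n", migrate_degrades_locality_sketch(&t, 1, 0));	/* -1: improves */
	return 0;
}
```

Because a single signed value carries both directions, the balancer can weigh NUMA locality alongside cache hotness instead of skipping the locality check whenever the task looks cache hot, which is the preference the commit title describes.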
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Galbraith <efault@gmx.de>
Link: http://lkml.kernel.org/r/1434455762-30857-2-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/features.h')
-rw-r--r-- | kernel/sched/features.h | 18 |
1 file changed, 5 insertions, 13 deletions
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 91e33cd485f6..83a50e7ca533 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -79,20 +79,12 @@ SCHED_FEAT(LB_MIN, false)
  * numa_balancing=
  */
 #ifdef CONFIG_NUMA_BALANCING
-SCHED_FEAT(NUMA, false)
 
 /*
- * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
- * higher number of hinting faults are recorded during active load
- * balancing.
+ * NUMA will favor moving tasks towards nodes where a higher number of
+ * hinting faults are recorded during active load balancing. It will
+ * resist moving tasks towards nodes where a lower number of hinting
+ * faults have been recorded.
  */
-SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)
-
-/*
- * NUMA_RESIST_LOWER will resist moving tasks towards nodes where a
- * lower number of hinting faults have been recorded. As this has
- * the potential to prevent a task ever migrating to a new node
- * due to CPU overload it is disabled by default.
- */
-SCHED_FEAT(NUMA_RESIST_LOWER, false)
+SCHED_FEAT(NUMA, true)
 #endif
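
As context for the diff: each SCHED_FEAT(name, default) entry in features.h is consumed by an X-macro pattern that expands the same list into an enum of feature bits and a default mask, and the resulting bits can be flipped at runtime through the sched_features debugfs file when CONFIG_SCHED_DEBUG is set. Below is a compilable, simplified sketch of that pattern; FEATURES_LIST, FEATURE_BIT_*, and feature_enabled are illustrative names, not the kernel's.

```c
#include <stdio.h>

/* Simplified stand-in for kernel/sched/features.h: one entry per feature. */
#define FEATURES_LIST(F)	\
	F(LB_MIN, 0)		\
	F(NUMA, 1)

/* First expansion: give every feature its own bit index. */
#define DEFINE_FEATURE_BIT(name, enabled) FEATURE_BIT_##name,
enum { FEATURES_LIST(DEFINE_FEATURE_BIT) FEATURE_NR };

/* Second expansion: build the default on/off mask from the per-feature defaults. */
#define DEFINE_FEATURE_DEFAULT(name, enabled) \
	((unsigned long)(enabled) << FEATURE_BIT_##name) |
static const unsigned long feature_defaults =
	FEATURES_LIST(DEFINE_FEATURE_DEFAULT) 0UL;

/* Test a feature bit, analogous in spirit to the kernel's sched_feat() check. */
#define feature_enabled(name) \
	(!!(feature_defaults & (1UL << FEATURE_BIT_##name)))

int main(void)
{
	printf("LB_MIN enabled: %d\n", feature_enabled(LB_MIN));
	printf("NUMA enabled:   %d\n", feature_enabled(NUMA));
	return 0;
}
```

With this layout, removing NUMA_FAVOUR_HIGHER and NUMA_RESIST_LOWER from the list shrinks both the enum and the default mask automatically, which is why the commit only has to touch the SCHED_FEAT entries themselves.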