author | Vincent Guittot <vincent.guittot@linaro.org> | 2020-11-02 11:24:57 +0100
---|---|---
committer | Peter Zijlstra <peterz@infradead.org> | 2020-11-10 18:38:48 +0100
commit | 16b0a7a1a0af9db6e008fecd195fe4d6cb366d83 (patch) |
tree | c77db79bb4134607ed7840ec091dffc7ffb20e55 /kernel/bpf |
parent | a73f863af4ce9730795eab7097fb2102e6854365 (diff) |
sched/fair: Ensure tasks spreading in LLC during LB
schbench shows a latency increase for the 95th percentile and above since:
commit 0b0695f2b34a ("sched/fair: Rework load_balance()")
Align the behavior of the load balancer with the wake-up path, which tries
to select an idle CPU that belongs to the LLC for a waking task.
calculate_imbalance() will use nr_running instead of the spare capacity
when CPUs share resources (i.e. cache) at the domain level. This will
ensure a better spread of tasks on idle CPUs.
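To make that decision concrete, here is a small, self-contained C sketch of the logic described above, not the patch itself: struct group_stats and pick_imbalance() are simplified, hypothetical stand-ins, while SD_SHARE_PKG_RESOURCES, migrate_util and migrate_task are the kernel names being modelled.

```c
#include <stdio.h>

#define SD_SHARE_PKG_RESOURCES 0x0100	/* CPUs in this domain share cache (LLC) */

enum migration_type { migrate_util, migrate_task };

/* Simplified stand-in for the scheduler's per-group statistics. */
struct group_stats {
	unsigned long group_util;	/* summed utilization of the group */
	unsigned long group_capacity;	/* summed compute capacity */
	long idle_cpus;			/* idle CPUs in the group */
};

/*
 * Sketch of the branch in calculate_imbalance() this change targets:
 * when the domain shares its cache, balance on idle-CPU/task counts so
 * tasks spread inside the LLC, matching what the wake-up path does;
 * otherwise keep filling the local group's spare capacity.
 */
static enum migration_type pick_imbalance(int sd_flags,
					  const struct group_stats *local,
					  const struct group_stats *busiest,
					  long *imbalance)
{
	if (!(sd_flags & SD_SHARE_PKG_RESOURCES)) {
		/* fill the local group's spare capacity with utilization */
		long spare = (long)local->group_capacity - (long)local->group_util;
		*imbalance = spare > 0 ? spare : 0;
		return migrate_util;
	}
	/* spread tasks: move half the idle-CPU difference, never negative */
	long diff = (local->idle_cpus - busiest->idle_cpus) / 2;
	*imbalance = diff > 0 ? diff : 0;
	return migrate_task;
}

int main(void)
{
	struct group_stats local   = { .group_util = 100, .group_capacity = 1024, .idle_cpus = 3 };
	struct group_stats busiest = { .group_util = 900, .group_capacity = 1024, .idle_cpus = 0 };
	long imb;

	enum migration_type t = pick_imbalance(SD_SHARE_PKG_RESOURCES, &local, &busiest, &imb);
	printf("LLC domain:     %s, imbalance=%ld\n",
	       t == migrate_task ? "migrate_task" : "migrate_util", imb);

	t = pick_imbalance(0, &local, &busiest, &imb);
	printf("non-LLC domain: %s, imbalance=%ld\n",
	       t == migrate_task ? "migrate_task" : "migrate_util", imb);
	return 0;
}
```

The design point the sketch illustrates: within an LLC an idle sibling is cheap to use, so counting tasks and idle CPUs (as the wake-up path already does) spreads work better than comparing spare capacity, which can leave CPUs idle while latency piles up on others.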
Running schbench on a hikey (8-core arm64) shows the problem:
tip/sched/core :
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
50.0th: 33
75.0th: 45
90.0th: 51
95.0th: 4152
*99.0th: 14288
99.5th: 14288
99.9th: 14288
min=0, max=14276
tip/sched/core + patch :
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
50.0th: 34
75.0th: 47
90.0th: 52
95.0th: 78
*99.0th: 94
99.5th: 94
99.9th: 94
min=0, max=94
Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
Reported-by: Chris Mason <clm@fb.com>
Suggested-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Tested-by: Rik van Riel <riel@surriel.com>
Link: https://lkml.kernel.org/r/20201102102457.28808-1-vincent.guittot@linaro.org