| author    | Peter Zijlstra <peterz@infradead.org> | 2024-04-03 09:50:07 +0200 |
|-----------|---------------------------------------|---------------------------|
| committer | Peter Zijlstra <peterz@infradead.org> | 2024-08-17 11:06:40 +0200 |
| commit    | 8e2e13ac6122915bd98315237b0317495e391be0 (patch) | |
| tree      | efaee6b8fb17588e694f857e3ab17217ac9ad640 /kernel/sched/fair.c | |
| parent    | 949090eaf0a3e39aa0f4a675407e16d0e975da11 (diff) | |
sched/fair: Cleanup pick_task_fair() vs throttle
Per 54d27365cae8 ("sched/fair: Prevent throttling in early
pick_next_task_fair()") the reason check_cfs_rq_runtime() is under the
'if (curr)' check is to ensure the (downward) traversal does not
result in an empty cfs_rq.
But the pick_task_fair() 'copy' of all this restarts the whole
traversal via 'goto again' anyway, which solves that problem on its
own, so the check does not need to sit under 'if (curr)'.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ben Segall <bsegall@google.com>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Tested-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lkml.kernel.org/r/20240727105028.501679876@infradead.org
Diffstat (limited to 'kernel/sched/fair.c')
-rw-r--r-- | kernel/sched/fair.c | 6 |
1 file changed, 3 insertions, 3 deletions
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8201f0f4e709..7ba1ca56a63e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8471,11 +8471,11 @@ again:
                                 update_curr(cfs_rq);
                         else
                                 curr = NULL;
-
-                        if (unlikely(check_cfs_rq_runtime(cfs_rq)))
-                                goto again;
                 }
 
+                if (unlikely(check_cfs_rq_runtime(cfs_rq)))
+                        goto again;
+
                 se = pick_next_entity(cfs_rq);
                 cfs_rq = group_cfs_rq(se);
         } while (cfs_rq);
```
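To make the reasoning above concrete, the sketch below is a small standalone userspace model of the post-patch loop shape, not the kernel code: the struct layouts, helper bodies and the pick_entity()/main() driver are simplified stand-ins invented for illustration (the real function also handles curr->on_rq and an empty root cfs_rq, omitted here). It shows why placing check_cfs_rq_runtime() outside the 'if (curr)' block is still safe when a positive check restarts the whole downward walk.

```c
/*
 * Standalone model of the post-patch pick_task_fair() loop shape.
 * NOT kernel code: the structs and helpers below are simplified
 * stand-ins so the control flow can run in userspace.
 */
#include <stdbool.h>
#include <stdio.h>

struct cfs_rq;

struct sched_entity {
        const char      *name;
        struct cfs_rq   *my_q;          /* non-NULL if this is a group entity */
};

struct cfs_rq {
        struct sched_entity     *curr;          /* currently running entity, if any */
        struct sched_entity     *next;          /* what pick_next_entity() would choose */
        bool                    throttled;      /* would check_cfs_rq_runtime() fire? */
};

/* Simplified stand-ins for the kernel helpers named in the patch. */
static void update_curr(struct cfs_rq *cfs_rq) { (void)cfs_rq; }
static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq) { return cfs_rq->throttled; }
static struct sched_entity *pick_next_entity(struct cfs_rq *cfs_rq) { return cfs_rq->next; }
static struct cfs_rq *group_cfs_rq(struct sched_entity *se) { return se->my_q; }

static struct sched_entity *pick_entity(struct cfs_rq *root)
{
        struct sched_entity *se;
        struct cfs_rq *cfs_rq;

again:
        cfs_rq = root;
        do {
                struct sched_entity *curr = cfs_rq->curr;

                if (curr)
                        update_curr(cfs_rq);

                /*
                 * Post-patch placement: the throttle check runs at every
                 * level, not only when curr is set, and a hit restarts the
                 * whole downward walk, so we never descend into a cfs_rq
                 * the check has just emptied.
                 */
                if (check_cfs_rq_runtime(cfs_rq)) {
                        cfs_rq->throttled = false;      /* model only: pretend the throttle was handled */
                        goto again;
                }

                se = pick_next_entity(cfs_rq);
                cfs_rq = group_cfs_rq(se);
        } while (cfs_rq);

        return se;
}

int main(void)
{
        struct sched_entity task  = { .name = "task" };
        struct cfs_rq leaf        = { .next = &task };
        struct sched_entity group = { .name = "group", .my_q = &leaf };
        struct cfs_rq root        = { .next = &group, .throttled = true };

        /*
         * First pass hits the throttle check on the root and restarts;
         * the second pass walks root -> group -> leaf and picks the task.
         */
        printf("picked: %s\n", pick_entity(&root)->name);
        return 0;
}
```

Under these assumptions, the restart guarantees the traversal is never left standing in a cfs_rq that the check has just emptied, which is the property the original 'if (curr)' placement was protecting.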