author     Paul E. McKenney <paulmck@kernel.org>   2022-04-14 06:56:35 -0700
committer  Paul E. McKenney <paulmck@kernel.org>   2022-07-21 17:41:56 -0700
commit     dd04140531b5d38b77ad9ff7b18117654be5bf5c
tree       424f03517fcd26e3d5483b664501269aa997bb5f /kernel/rcu/tree.c
parent     bf95b2bc3e42f11f4d7a5e8a98376c2b4a2aa82f
rcu: Make polled grace-period API account for expedited grace periods
Currently, this code could splat:
oldstate = get_state_synchronize_rcu();
synchronize_rcu_expedited();
WARN_ON_ONCE(!poll_state_synchronize_rcu(oldstate));
This situation is counter-intuitive and user-unfriendly. After all, there
really was a perfectly valid full grace period right after the call to
get_state_synchronize_rcu(), so why shouldn't poll_state_synchronize_rcu()
know about it?
This commit therefore makes the polled grace-period API aware of expedited
grace periods in addition to the normal grace periods that it is already
aware of. With this change, the above code is guaranteed not to splat.
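For context, here is a minimal sketch of the deferred-free pattern that this guarantee serves. The struct foo, foo_retire(), and foo_reclaim() names are hypothetical and not part of this commit; only the polled-API calls themselves are real kernel interfaces:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		unsigned long gp_cookie;	/* Cookie from get_state_synchronize_rcu(). */
		/* ... payload ... */
	};

	/* Record the grace-period state when the object is retired. */
	static void foo_retire(struct foo *fp)
	{
		fp->gp_cookie = get_state_synchronize_rcu();
		/* ... queue fp for later reclamation ... */
	}

	/* Reclaim: free immediately if a full grace period (normal or,
	 * with this commit, expedited) has already elapsed. */
	static void foo_reclaim(struct foo *fp)
	{
		if (!poll_state_synchronize_rcu(fp->gp_cookie))
			cond_synchronize_rcu(fp->gp_cookie);	/* Blocks only if needed. */
		kfree(fp);
	}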
Please note that the above code can still splat due to counter wrap on the
one hand and situations involving partially overlapping normal/expedited
grace periods on the other. On 64-bit systems, the second is of course
much more likely than the first. It is possible to modify this approach
to prevent overlapping grace periods from causing splats, but only at
the expense of greatly increasing the probability of counter wrap, as
in within milliseconds on 32-bit systems and within minutes on 64-bit
systems.
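To make the counter-wrap caveat concrete, here is an illustrative user-space sketch. The ULONG_CMP_GE() macro below mirrors the kernel's wrap-tolerant comparison; the cookie values and per-grace-period increment are only illustrative, not the exact arithmetic used by poll_state_synchronize_rcu():

	#include <limits.h>
	#include <stdio.h>

	/* Wrap-tolerant comparison in the style of the kernel's ULONG_CMP_GE(). */
	#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

	int main(void)
	{
		unsigned long cookie = 1000;		/* Snapshot taken at some point. */
		unsigned long cur = cookie + 4;		/* Shortly afterwards: reads as "done". */

		printf("just after GP: done=%d\n", ULONG_CMP_GE(cur, cookie));

		/* After roughly half the counter space worth of further grace
		 * periods, the same old cookie is misread as "not yet done",
		 * which is what allows the WARN_ON_ONCE() above to splat. */
		cur = cookie + ULONG_MAX / 2 + 8;
		printf("after wrap:    done=%d\n", ULONG_CMP_GE(cur, cookie));
		return 0;
	}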
This commit is in preparation for polled expedited grace periods.
Link: https://lore.kernel.org/all/20220121142454.1994916-1-bfoster@redhat.com/
Link: https://docs.google.com/document/d/1RNKWW9jQyfjxw2E8dsXVTdvZYh0HnYeSHDKog9jhdN8/edit?usp=sharing
Cc: Brian Foster <bfoster@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Ian Kent <raven@themaw.net>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Diffstat (limited to 'kernel/rcu/tree.c')
-rw-r--r--  kernel/rcu/tree.c  9
1 file changed, 5 insertions, 4 deletions
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b40a5a19ddd2..1505b02b4e53 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1812,6 +1812,7 @@ static void rcu_poll_gp_seq_end(unsigned long *snap)
 	if (*snap && *snap == rcu_state.gp_seq_polled) {
 		rcu_seq_end(&rcu_state.gp_seq_polled);
 		rcu_state.gp_seq_polled_snap = 0;
+		rcu_state.gp_seq_polled_exp_snap = 0;
 	} else {
 		*snap = 0;
 	}
@@ -3913,10 +3914,10 @@ void synchronize_rcu(void)
 			 "Illegal synchronize_rcu() in RCU read-side critical section");
 	if (rcu_blocking_is_gp()) {
 		// Note well that this code runs with !PREEMPT && !SMP.
-		// In addition, all code that advances grace periods runs
-		// at process level.  Therefore, this GP overlaps with other
-		// GPs only by being fully nested within them, which allows
-		// reuse of ->gp_seq_polled_snap.
+		// In addition, all code that advances grace periods runs at
+		// process level.  Therefore, this normal GP overlaps with
+		// other normal GPs only by being fully nested within them,
+		// which allows reuse of ->gp_seq_polled_snap.
 		rcu_poll_gp_seq_start_unlocked(&rcu_state.gp_seq_polled_snap);
 		rcu_poll_gp_seq_end_unlocked(&rcu_state.gp_seq_polled_snap);
 		if (rcu_init_invoked())
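The expedited-path half of this change lives in kernel/rcu/tree_exp.h and is outside this tree.c-limited diffstat. As a rough sketch of the intended pairing (the exact placement in the expedited path is an assumption here; only the helpers and the ->gp_seq_polled_exp_snap field appear in the hunks above), the expedited grace-period sequence would be bracketed along these lines:

	/* Sketch: advance the polled counter across each expedited GP so that
	 * poll_state_synchronize_rcu() also sees expedited grace periods. */
	static void rcu_exp_gp_seq_start(void)		/* placement assumed */
	{
		rcu_seq_start(&rcu_state.expedited_sequence);
		rcu_poll_gp_seq_start_unlocked(&rcu_state.gp_seq_polled_exp_snap);
	}

	static void rcu_exp_gp_seq_end(void)		/* placement assumed */
	{
		rcu_poll_gp_seq_end_unlocked(&rcu_state.gp_seq_polled_exp_snap);
		rcu_seq_end(&rcu_state.expedited_sequence);
	}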