author    Zhen Lei <thunder.leizhen@huawei.com>    2024-09-04 21:39:41 +0800
committer Thomas Gleixner <tglx@linutronix.de>    2024-09-09 16:40:26 +0200
commit    63a4a9b52c3c7f86351710739011717a36652b72 (patch)
tree      63677b8b3a583314735b720d41ed26ed9b4c2047 /lib
parent    684d28feb8546d1e9597aa363c3bfcf52fe250b7 (diff)
debugobjects: Remove redundant checks in fill_pool()
fill_pool() checks locklessly at the beginning whether the pool has to be
refilled. After that it checks locklessly in a loop whether the free list
contains objects and repeats the refill check.

If both conditions are true, it acquires the pool lock and tries to move
objects from the free list to the pool, repeating the same checks again.

There are two redundancies in that:

 1) The repeated check for the fill condition
 2) The loop processing

The repeated check is pointless as it was just established that a fill is
required. The condition has to be re-evaluated under the lock anyway. The
loop processing is not required either because there is practically zero
chance that a repeated attempt will succeed if the checks under the lock
terminate the moving of objects.

Remove the redundant check and replace the loop with a simple if condition.

[ tglx: Massaged change log ]

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20240904133944.2124-4-thunder.leizhen@huawei.com
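For illustration, here is a minimal userspace C sketch of the pattern the
patch converges on: one cheap lockless hint check deciding whether taking
the lock is worthwhile at all, with the authoritative checks done once
under the lock. All names here (nr_tofree, pool_free, POOL_MIN,
fill_pool_sketch) are stand-ins for the kernel's obj_nr_tofree,
obj_pool_free and debug_objects_pool_min_level, and the __atomic builtins
stand in for READ_ONCE()/WRITE_ONCE(); this is not the kernel code itself.

	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
	static int nr_tofree = 4;	/* stand-in for obj_nr_tofree */
	static int pool_free;		/* stand-in for obj_pool_free */
	#define POOL_MIN 2		/* stand-in for debug_objects_pool_min_level */

	static void fill_pool_sketch(void)
	{
		/* Lockless hint (READ_ONCE() analogue); pairs with the
		 * atomic store in the locked section below. */
		if (!__atomic_load_n(&nr_tofree, __ATOMIC_RELAXED))
			return;

		pthread_mutex_lock(&pool_lock);
		/* Recheck under the lock: another thread may have drained
		 * the free list or refilled the pool in the meantime. */
		while (nr_tofree && pool_free < POOL_MIN) {
			__atomic_store_n(&nr_tofree, nr_tofree - 1, __ATOMIC_RELAXED);
			pool_free++;	/* "move" one object into the pool */
		}
		pthread_mutex_unlock(&pool_lock);
	}

	int main(void)
	{
		fill_pool_sketch();
		printf("pool_free=%d nr_tofree=%d\n", pool_free, nr_tofree);
		return 0;
	}

The shape makes the change log's point visible: since the real checks must
be redone under pool_lock anyway, repeating them (or looping) outside the
lock buys nothing.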
Diffstat (limited to 'lib')
-rw-r--r--  lib/debugobjects.c | 12
1 files changed, 5 insertions, 7 deletions
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 6329a86edcf1..5ce473ad499b 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -135,15 +135,13 @@ static void fill_pool(void)
 		return;
 
 	/*
-	 * Reuse objs from the global free list; they will be reinitialized
-	 * when allocating.
+	 * Reuse objs from the global obj_to_free list; they will be
+	 * reinitialized when allocating.
 	 *
-	 * Both obj_nr_tofree and obj_pool_free are checked locklessly; the
-	 * READ_ONCE()s pair with the WRITE_ONCE()s in pool_lock critical
-	 * sections.
+	 * obj_nr_tofree is checked locklessly; the READ_ONCE() pairs with
+	 * the WRITE_ONCE() in pool_lock critical sections.
 	 */
-	while (READ_ONCE(obj_nr_tofree) &&
-	       READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
+	if (READ_ONCE(obj_nr_tofree)) {
 		raw_spin_lock_irqsave(&pool_lock, flags);
 		/*
 		 * Recheck with the lock held as the worker thread might have