path: root/kernel/locking/qspinlock.c
Age        | Commit message                                                                | Author
2022-04-05 | locking: Apply contention tracepoints in the slow path                        | Namhyung Kim
2020-07-08 | x86/kvm: Add "nopvspin" parameter to disable PV spinlocks                     | Zhenzhong Duan
2020-01-17 | locking/qspinlock: Fix inaccessible URL of MCS lock paper                     | Waiman Long
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157            | Thomas Gleixner
2019-04-10 | locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs         | Waiman Long
2019-02-28 | locking/qspinlock: Remove unnecessary BUG_ON() call                           | Waiman Long
2019-02-04 | locking/qspinlock_stat: Track the no MCS node available case                  | Waiman Long
2019-02-04 | locking/qspinlock: Handle > 4 slowpath nesting levels                         | Waiman Long
2018-10-17 | locking/pvqspinlock: Extend node size when pvqspinlock is configured          | Waiman Long
2018-10-17 | locking/qspinlock_stat: Count instances of nested lock slowpaths              | Waiman Long
2018-10-16 | locking/qspinlock, x86: Provide liveness guarantee                            | Peter Zijlstra
2018-10-16 | locking/qspinlock: Rework some comments                                       | Peter Zijlstra
2018-10-16 | locking/qspinlock: Re-order code                                              | Peter Zijlstra
2018-04-27 | locking/qspinlock: Add stat tracking for pending vs. slowpath                 | Waiman Long
2018-04-27 | locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() when locking        | Will Deacon
2018-04-27 | locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()       | Will Deacon
2018-04-27 | locking/qspinlock: Use smp_cond_load_relaxed() to wait for next node          | Will Deacon
2018-04-27 | locking/qspinlock: Use atomic_cond_read_acquire()                             | Will Deacon
2018-04-27 | locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue  | Will Deacon
2018-04-27 | locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath      | Will Deacon
2018-04-27 | locking/qspinlock: Bound spinning on pending->locked transition in slowpath   | Will Deacon
2018-04-27 | locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock'         | Will Deacon
2018-02-13 | locking/qspinlock: Ensure node->count is updated before initialising node     | Will Deacon
2018-02-13 | locking/qspinlock: Ensure node is initialised before updating prev->next      | Will Deacon
2017-12-04 | locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()   | Paul E. McKenney
2017-08-17 | locking: Remove spin_unlock_wait() generic definitions                        | Paul E. McKenney
2017-07-08 | locking/qspinlock: Explicitly include asm/prefetch.h                          | Stafford Horne
2016-06-27 | locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec()  | Pan Xinhui
2016-06-14 | locking/barriers: Introduce smp_acquire__after_ctrl_dep()                     | Peter Zijlstra
2016-06-14 | locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()     | Peter Zijlstra
2016-06-08 | locking/qspinlock: Add comments                                               | Peter Zijlstra
2016-06-08 | locking/qspinlock: Clarify xchg_tail() ordering                               | Peter Zijlstra
2016-06-08 | locking/qspinlock: Fix spin_unlock_wait() some more                           | Peter Zijlstra
2016-02-29 | locking/qspinlock: Use smp_cond_acquire() in pending code                     | Waiman Long
2015-12-04 | locking/pvqspinlock: Queue node adaptive spinning                             | Waiman Long
2015-12-04 | locking/pvqspinlock: Allow limited lock stealing                              | Waiman Long
2015-12-04 | locking, sched: Introduce smp_cond_acquire() and use it                       | Peter Zijlstra
2015-11-23 | locking/qspinlock: Avoid redundant read of next pointer                       | Waiman Long
2015-11-23 | locking/qspinlock: Prefetch the next node cacheline                           | Waiman Long
2015-11-23 | locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg()     | Waiman Long
2015-09-11 | locking/qspinlock/x86: Fix performance regression under unaccelerated VMs     | Peter Zijlstra
2015-08-03 | locking/pvqspinlock: Only kick CPU at unlock time                             | Waiman Long
2015-05-08 | locking/pvqspinlock: Implement simple paravirt support for the qspinlock      | Waiman Long
2015-05-08 | locking/qspinlock: Revert to test-and-set on hypervisors                      | Peter Zijlstra (Intel)
2015-05-08 | locking/qspinlock: Use a simple write to grab the lock                        | Waiman Long
2015-05-08 | locking/qspinlock: Optimize for smaller NR_CPUS                               | Peter Zijlstra (Intel)
2015-05-08 | locking/qspinlock: Extract out code snippets for the next patch               | Waiman Long
2015-05-08 | locking/qspinlock: Add pending bit                                            | Peter Zijlstra (Intel)
2015-05-08 | locking/qspinlock: Introduce a simple generic 4-byte queued spinlock          | Waiman Long