author    Linus Torvalds <torvalds@linux-foundation.org>  2024-03-14 16:25:01 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2024-03-14 16:25:01 -0700
commit    63bd30f249dcf0a7ce16967935cecee8feec24bb (patch)
tree      283d1c6ed71295736a5bfcc8064b22e68f31735b /include
parent    01732755ee30f0862c80b276de6af3611a3ded83 (diff)
parent    2aa043a55b9a764c9cbde5a8c654eeaaffe224cf (diff)
Merge tag 'trace-ring-buffer-v6.8-rc7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing updates from Steven Rostedt:
- Do not update shortest_full in rb_watermark_hit() if the watermark is
  hit. The shortest_full field was being updated regardless of whether
  the task was going to wait. If the watermark is hit, the task is not
  going to wait, so do not update the shortest_full field (used by the
  waker).
- Update the shortest_full field before setting the full_waiters_pending
  flag.
  In the poll logic, the full_waiters_pending flag was being set before
  the shortest_full field was set. When full_waiters_pending is set,
  writers check shortest_full, which holds the smallest percentage the
  ring buffer must be filled to before the waiters should be woken. If
  the ring buffer is filled past that percentage, the writer calls the
  irq_work to wake up the waiters.
  The problem was that the poll logic set full_waiters_pending before
  updating shortest_full. While shortest_full is still zero, every write
  triggers the writer to call the irq_work to wake up the waiters, and
  the irq_work resets shortest_full back to zero, as the woken waiters
  are supposed to reset it. (A simplified model of the corrected
  ordering follows this list.)
- There is some optimized logic in rb_watermark_hit() that is used by
  ring_buffer_wait(). Use that helper function in the poll logic as
  well.
- Restructure ring_buffer_wait() to use wait_event_interruptible().
  The logic that wakes up pending readers when the file descriptor is
  closed is racy. Restructure ring_buffer_wait() to allow callers to
  pass in exit conditions besides the ring buffer having enough data in
  it, by using wait_event_interruptible().
- Update tracing_wait_on_pipe() to call ring_buffer_wait() with its own
  condition for exiting the wait loop (a hypothetical caller-side usage
  sketch follows the diff below).
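
Taken together, the first two items enforce one ordering rule between a
waiter and the writer. The following is a minimal user-space model of that
rule using C11 atomics; every name here is illustrative only (the real
logic lives in kernel/trace/ring_buffer.c, with its own locking and
helpers), not the kernel's code:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int  shortest_full;        /* smallest %-full any waiter needs (0 = unset) */
static atomic_bool full_waiters_pending; /* writer checks this on every write */

/* Waiter side: returns true if the buffer is already full enough. */
static bool watermark_hit(int filled, int full)
{
	if (filled >= full)
		return true;  /* not going to wait: leave shortest_full alone */

	int cur = atomic_load(&shortest_full);
	if (cur == 0 || cur > full)
		atomic_store(&shortest_full, full);

	/* Publish the target BEFORE the flag: the release store ensures a
	 * writer that sees the flag also sees a real target percentage. */
	atomic_store_explicit(&full_waiters_pending, true, memory_order_release);
	return false;
}

/* Writer side: wake waiters only once a real target has been reached. */
static bool writer_should_wake(int filled)
{
	if (!atomic_load_explicit(&full_waiters_pending, memory_order_acquire))
		return false;
	/* With the old (buggy) order, shortest_full could still be 0 here,
	 * so every single write would fire the wakeup irq_work. */
	return filled >= atomic_load(&shortest_full);
}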
* tag 'trace-ring-buffer-v6.8-rc7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
tracing/ring-buffer: Fix wait_on_pipe() race
ring-buffer: Use wait_event_interruptible() in ring_buffer_wait()
ring-buffer: Reuse rb_watermark_hit() for the poll logic
ring-buffer: Fix full_waiters_pending in poll
ring-buffer: Do not set shortest_full when full target is hit
Diffstat (limited to 'include')
-rw-r--r--  include/linux/ring_buffer.h   4
-rw-r--r--  include/linux/trace_events.h  5
2 files changed, 7 insertions, 2 deletions
diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index fa802db216f9..dc5ae4e96aee 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -98,7 +98,9 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
 	__ring_buffer_alloc((size), (flags), &__key);	\
 })
 
-int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+typedef bool (*ring_buffer_cond_fn)(void *data);
+int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
+		     ring_buffer_cond_fn cond, void *data);
 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
 			  struct file *filp, poll_table *poll_table, int full);
 void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d68ff9b1247f..fc6d0af56bb1 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -103,13 +103,16 @@ struct trace_iterator {
 	unsigned int		temp_size;
 	char			*fmt;	/* modified format holder */
 	unsigned int		fmt_size;
-	long			wait_index;
+	atomic_t		wait_index;
 
 	/* trace_seq for __print_flags() and __print_symbolic() etc. */
 	struct trace_seq	tmp_seq;
 
 	cpumask_var_t		started;
 
+	/* Set when the file is closed to prevent new waiters */
+	bool			closed;
+
 	/* it's true when current open file is snapshot */
 	bool			snapshot;
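
To make the new interface concrete, here is a hypothetical caller-side
sketch. my_wait_cond() and my_wait_for_data() are invented names for
illustration; the real in-tree caller is tracing_wait_on_pipe() in
kernel/trace/trace.c. Per the commit message, the wait can now end either
because the buffer condition is met or because the caller's own condition
returns true:

/* Hypothetical caller-supplied exit condition. It checks the new
 * trace_iterator::closed field added by the hunk above. */
static bool my_wait_cond(void *data)
{
	struct trace_iterator *iter = data;

	/* Stop waiting as soon as the file has been closed. */
	return READ_ONCE(iter->closed);
}

/* Hypothetical wrapper: sleep until the buffer has data on @cpu, or
 * until the file backing @iter is closed. */
static int my_wait_for_data(struct trace_buffer *buffer,
			    struct trace_iterator *iter, int cpu)
{
	/* full == 0 waits for any data rather than a fill percentage. */
	return ring_buffer_wait(buffer, cpu, 0, my_wait_cond, iter);
}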