| author | Pavel Begunkov <asml.silence@gmail.com> | 2022-12-02 17:47:25 +0000 |
| --- | --- | --- |
| committer | Jens Axboe <axboe@kernel.dk> | 2022-12-15 08:20:10 -0700 |
| commit | a8cf95f93610eb8282f8b6d0117ba78b74588d6b (patch) | |
| tree | 7ff8b291a2c3df4c45a9afe14db1d293c3d8a399 /io_uring/rw.c | |
| parent | e5f30f6fb29a0b8fa7ca784e44571a610b949b04 (diff) | |
io_uring: fix overflow handling regression
Because the single task locking series got reordered ahead of the
timeout and completion lock changes, two hunks inadvertently ended up
using __io_fill_cqe_req() rather than io_fill_cqe_req(). This meant
that we dropped overflow handling in those two spots. Reinstate the
correct CQE filling helper.
Fixes: f66f73421f0a ("io_uring: skip spinlocking for ->task_complete")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring/rw.c')
 io_uring/rw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index b9cac5706e8d..8227af2e1c0f 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1062,7 +1062,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 			continue;
 
 		req->cqe.flags = io_put_kbuf(req, 0);
-		__io_fill_cqe_req(req->ctx, req);
+		io_fill_cqe_req(req->ctx, req);
 	}
 
 	if (unlikely(!nr_events))
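For context on why the helper choice matters: the fast-path helper only writes the CQE into the CQ ring, while the full helper additionally falls back to the overflow path when the ring has no free entries, so the completion is queued rather than silently lost. The sketch below is a minimal userspace model of that pattern, not the kernel's actual io_uring code; the names fill_cqe_fast() and fill_cqe(), the struct layout, and the malloc-backed overflow list are illustrative assumptions.

```c
/*
 * Simplified model of "fill the CQE, fall back to an overflow list".
 * All names and structures here are illustrative, not kernel APIs.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define CQ_ENTRIES 4

struct cqe {
	unsigned long long user_data;
	int res;
	unsigned int flags;
};

struct overflow_node {
	struct cqe cqe;
	struct overflow_node *next;
};

struct ring_ctx {
	struct cqe cq[CQ_ENTRIES];
	unsigned int head, tail;        /* consumer/producer positions */
	struct overflow_node *overflow; /* completions the CQ ring could not hold */
};

/* Fast path: copy the CQE into the ring, fail if the CQ is already full. */
static bool fill_cqe_fast(struct ring_ctx *ctx, const struct cqe *cqe)
{
	if (ctx->tail - ctx->head >= CQ_ENTRIES)
		return false;                   /* no room, caller must cope */
	ctx->cq[ctx->tail++ % CQ_ENTRIES] = *cqe;
	return true;
}

/* Full helper: fall back to an overflow list so the completion is not lost. */
static bool fill_cqe(struct ring_ctx *ctx, const struct cqe *cqe)
{
	struct overflow_node *node;

	if (fill_cqe_fast(ctx, cqe))
		return true;

	node = malloc(sizeof(*node));
	if (!node)
		return false;                   /* only now is the completion dropped */
	node->cqe = *cqe;
	node->next = ctx->overflow;
	ctx->overflow = node;
	return true;
}

int main(void)
{
	struct ring_ctx ctx = { .head = 0, .tail = 0, .overflow = NULL };
	int overflowed = 0;

	/* Post more completions than the CQ ring can hold. */
	for (int i = 0; i < CQ_ENTRIES + 2; i++) {
		struct cqe cqe = { .user_data = (unsigned long long)i };

		fill_cqe(&ctx, &cqe);
	}

	/* Count (and free) the completions that had to take the overflow path. */
	while (ctx.overflow) {
		struct overflow_node *n = ctx.overflow;

		ctx.overflow = n->next;
		overflowed++;
		free(n);
	}
	printf("in ring: %u, overflowed: %d\n", ctx.tail - ctx.head, overflowed);
	return 0;
}
```

Running the model posts six completions into a four-entry ring: the last two land on the overflow list instead of being dropped, which is the behaviour the patch restores for the iopoll path by switching back to io_fill_cqe_req().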