author		Kefeng Wang <wangkefeng.wang@huawei.com>	2024-04-03 16:38:03 +0800
committer	Palmer Dabbelt <palmer@rivosinc.com>	2024-05-22 16:12:56 -0700
commit		4c6c0020427a4547845a83f7e4d6085e16c3e24f
tree		bbdc6af233a0483e0b4b4bb160e7aa608bf1ba66	/arch/riscv/mm/fault.c
parent		9850e73e82972f518b75dd0d94d2322f44d9191d
riscv: mm: accelerate pagefault when badaccess
The access_error() check on the vma has already been performed under the
per-VMA lock; if it reports a bad access, handle the error directly instead
of retrying the lookup under mmap_lock. Since the page fault is then handled
entirely under the per-VMA lock, count it as a vma lock event with
VMA_LOCK_SUCCESS.
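In other words, the per-VMA-lock path in handle_page_fault() no longer
bounces a known-bad access to the mmap_lock slow path. Below is a simplified
sketch of the resulting flow, not the literal upstream code: surrounding
context and error paths are elided, and the comments are mine, but the
identifiers are those used in arch/riscv/mm/fault.c.

	/* Try the lockless path first: look up the VMA under RCU. */
	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		goto lock_mmap;		/* no VMA found: fall back to mmap_lock */

	if (unlikely(access_error(cause, vma))) {
		/*
		 * Permission error on a valid VMA: the outcome cannot
		 * change by rechecking under mmap_lock, so report it
		 * immediately instead of retrying on the slow path.
		 */
		vma_end_read(vma);
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		tsk->thread.bad_cause = cause;	/* raw exception cause, for reporting */
		bad_area_nosemaphore(regs, SEGV_ACCERR, addr);
		return;
	}

	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);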
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20240403083805.1818160-6-wangkefeng.wang@huawei.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Diffstat (limited to 'arch/riscv/mm/fault.c')
-rw-r--r--	arch/riscv/mm/fault.c	5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 5224f3733802..b3fcf7d67efb 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -293,7 +293,10 @@ void handle_page_fault(struct pt_regs *regs)
 
 	if (unlikely(access_error(cause, vma))) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		tsk->thread.bad_cause = cause;
+		bad_area_nosemaphore(regs, SEGV_ACCERR, addr);
+		return;
 	}
 
 	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
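A note on the arguments in the added lines: bad_area_nosemaphore() takes a
siginfo code, and a failed access_error() check means a permission problem
on a mapped VMA, so the fast path must pass SEGV_ACCERR rather than the
local code variable (which still holds SEGV_MAPERR at that point), while
thread.bad_cause records the raw exception cause for later die() reporting.
Roughly, paraphrased from the helper defined earlier in the same file
(comments mine):

static inline void
bad_area_nosemaphore(struct pt_regs *regs, int code, unsigned long addr)
{
	/* User-mode faults become a SIGSEGV with the given si_code. */
	if (user_mode(regs)) {
		do_trap(regs, SIGSEGV, code, addr);
		return;
	}

	/* Kernel-mode faults go down the oops/exception-fixup path. */
	no_context(regs, addr);
}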