author		Liam R. Howlett <Liam.Howlett@Oracle.com>	2024-08-30 00:00:58 -0400
committer	Andrew Morton <akpm@linux-foundation.org>	2024-09-03 21:15:52 -0700
commit		63fc66f5b6b18f39269a66cf34d8cb7a24fbfe88 (patch)
tree		d7861de53cb52d9879fb9642b4b7961373aa1639 /mm/vma.c
parent		13d77e0133908721f7da093ffd3169a92bae8b11 (diff)
ipc/shm, mm: drop do_vma_munmap()
The do_vma_munmap() wrapper existed for callers that didn't have a vma
iterator and needed to check the vma mseal status prior to calling the
underlying munmap().  All callers now use a vma iterator, and since the
mseal check has been moved to do_vmi_align_munmap() and the vmas are
aligned, that function can be called directly instead.
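As a rough sketch of what a converted caller looks like (the ipc/shm
context itself is outside this mm/vma.c diff, so the surrounding code
and exact arguments here are assumed, not quoted from the patch):

	/* Before: the wrapper hid the mm argument behind the vma. */
	do_vma_munmap(&vmi, vma, vma->vm_start, vma->vm_end, NULL, false);

	/*
	 * After: call the aligned helper directly; the iterator is
	 * already positioned and vm_start/vm_end are vma-aligned.
	 */
	do_vmi_align_munmap(&vmi, vma, mm, vma->vm_start, vma->vm_end,
			    NULL, false);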
do_vmi_align_munmap() can no longer be static, as ipc/shm now uses it;
it is exported via the mm.h header.
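The exported declaration in include/linux/mm.h would read along these
lines (signature inferred from the call sites, not quoted from this
diff):

	int do_vmi_align_munmap(struct vma_iterator *vmi,
			struct vm_area_struct *vma, struct mm_struct *mm,
			unsigned long start, unsigned long end,
			struct list_head *uf, bool unlock);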
Link: https://lkml.kernel.org/r/20240830040101.822209-19-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Bert Karwatzki <spasswolf@web.de>
Cc: Jeff Xu <jeffxu@chromium.org>
Cc: Jiri Olsa <olsajiri@gmail.com>
Cc: Kees Cook <kees@kernel.org>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Paul Moore <paul@paul-moore.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/vma.c')
-rw-r--r--	mm/vma.c	12
1 file changed, 6 insertions, 6 deletions
@@ -658,8 +658,8 @@ static inline void vms_clear_ptes(struct vma_munmap_struct *vms,
 	 */
 	mas_set(mas_detach, 1);
 	lru_add_drain();
-	tlb_gather_mmu(&tlb, vms->mm);
-	update_hiwater_rss(vms->mm);
+	tlb_gather_mmu(&tlb, vms->vma->vm_mm);
+	update_hiwater_rss(vms->vma->vm_mm);
 	unmap_vmas(&tlb, mas_detach, vms->vma, vms->start, vms->end,
 		   vms->vma_count, mm_wr_locked);
 
@@ -672,14 +672,14 @@ static inline void vms_clear_ptes(struct vma_munmap_struct *vms,
 }
 
 void vms_clean_up_area(struct vma_munmap_struct *vms,
-		struct ma_state *mas_detach, bool mm_wr_locked)
+		struct ma_state *mas_detach)
 {
 	struct vm_area_struct *vma;
 
 	if (!vms->nr_pages)
 		return;
 
-	vms_clear_ptes(vms, mas_detach, mm_wr_locked);
+	vms_clear_ptes(vms, mas_detach, true);
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
 		if (vma->vm_ops && vma->vm_ops->close)
@@ -702,7 +702,7 @@ void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 
-	mm = vms->mm;
+	mm = current->mm;
 	mm->map_count -= vms->vma_count;
 	mm->locked_vm -= vms->locked_vm;
 	if (vms->unlock)
@@ -770,7 +770,7 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 	 * its limit temporarily, to help free resources as expected.
 	 */
 	if (vms->end < vms->vma->vm_end &&
-	    vms->mm->map_count >= sysctl_max_map_count)
+	    vms->vma->vm_mm->map_count >= sysctl_max_map_count)
 		goto map_count_exceeded;
 
 	/* Don't bother splitting the VMA if we can't unmap it anyway */
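The mm/vma.c hunks all follow one pattern: the munmap state no longer
carries a cached mm pointer, so the mm is reached through the first vma
(or via current->mm in the completion path, which runs in the unmapping
task's context).  A minimal sketch of that pattern, with the helper
name invented purely for illustration:

	/*
	 * Hypothetical helper, not in the patch: recover the mm from
	 * the munmap state now that vms->mm is gone.
	 */
	static inline struct mm_struct *vms_mm(struct vma_munmap_struct *vms)
	{
		return vms->vma->vm_mm;	/* every vma points back at its mm */
	}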