author	Liam R. Howlett <Liam.Howlett@Oracle.com>	2024-08-30 00:00:59 -0400
committer	Andrew Morton <akpm@linux-foundation.org>	2024-09-03 21:15:52 -0700
commit	224c1c702c08ca4d874690991f02e5b08c816e5b (patch)
tree	d8cac2ba41ff7cb42d3cbb206c32f16cf7ef5a53 /mm/vma.h
parent	63fc66f5b6b18f39269a66cf34d8cb7a24fbfe88 (diff)
mm: move may_expand_vm() check in mmap_region()
The may_expand_vm() check requires the count of the pages within the munmap range. Since this count is needed for accounting and is obtained later anyway, reordering the may_expand_vm() check to later in the call stack, after the vma munmap struct (vms) is initialised and the gather stage has potentially run, allows for a single loop over the vmas. The gather stage does not commit any work, so everything can be undone in the case of a failure.

The MAP_FIXED page count is available after the vms_gather_munmap_vmas() call, so use it instead of looping over the vmas twice.

Link: https://lkml.kernel.org/r/20240830040101.822209-20-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Bert Karwatzki <spasswolf@web.de>
Cc: Jeff Xu <jeffxu@chromium.org>
Cc: Jiri Olsa <olsajiri@gmail.com>
Cc: Kees Cook <kees@kernel.org>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Paul Moore <paul@paul-moore.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
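For illustration, a rough sketch of the reordered flow in mmap_region() as described above; this is not the actual kernel code, and the init_vma_munmap() arguments and the vms.nr_pages field name are assumptions based on the commit message rather than copied from the source:

	/* MAP_FIXED path: gather the VMAs in [addr, end) without committing. */
	init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock = */ false);
	error = vms_gather_munmap_vmas(&vms, &mas_detach);
	if (error)
		goto gather_failed;

	/*
	 * The gather stage has recorded the page count of the range being
	 * replaced, so the address space limit check can use it directly
	 * instead of walking the VMAs a second time.
	 */
	if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages)) {
		error = -ENOMEM;
		goto abort_munmap;	/* gather-only work can still be undone */
	}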
Diffstat (limited to 'mm/vma.h')
-rw-r--r--	mm/vma.h	3
1 file changed, 0 insertions, 3 deletions
diff --git a/mm/vma.h b/mm/vma.h
index b59d470cc223..45fbc56bc0b0 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -319,9 +319,6 @@ bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
int mm_take_all_locks(struct mm_struct *mm);
void mm_drop_all_locks(struct mm_struct *mm);
-unsigned long count_vma_pages_range(struct mm_struct *mm,
- unsigned long addr, unsigned long end,
- unsigned long *nr_accounted);
static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
{