author     Ritesh Harjani (IBM) <ritesh.list@gmail.com>   2024-11-13 19:49:54 +0530
committer  Andrew Morton <akpm@linux-foundation.org>      2024-11-14 22:49:19 -0800
commit     2532e6c74a67e65b95f310946e0c0e0a41b3a34b (patch)
tree       5e884677ee6df8ea21f441803a7452c4f3fbeae9 /mm
parent     811808d365398680b628d2b88aafeba77c88691a (diff)
cma: enforce non-zero pageblock_order during cma_init_reserved_mem()
cma_init_reserved_mem() checks base and size alignment against CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during early boot when pageblock_order is still 0. That means if base and size do not have pageblock_order alignment, it can cause functional failures later when the CMA area is activated. So let's enforce a non-zero pageblock_order during cma_init_reserved_mem() to catch such wrong usages.

1. This was seen with fadump on PowerPC, which was calling cma_init_reserved_mem() before pageblock_order was initialized. This is now fixed in fadump on PowerPC itself. The details, including the userspace-visible effect of the issue, can be found in that patch [1].

2. However, it was also decided that we should add a stronger enforcement check within cma_init_reserved_mem() to catch such wrong usages [2]. Hence this patch.

This is OK to be in -next and no "Fixes" tag is required for this patch.

[1]: https://lore.kernel.org/all/3ae208e48c0d9cefe53d2dc4f593388067405b7d.1729146153.git.ritesh.list@gmail.com/
[2]: https://lore.kernel.org/all/83eb128e-4f06-4725-a843-a4563f246a44@redhat.com/

Link: https://lkml.kernel.org/r/e274344b44d5f80fa54c52f530387257fe99ec65.1731505681.git.ritesh.list@gmail.com
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
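[Editor's note, not part of the commit message] CMA_MIN_ALIGNMENT_BYTES is derived from pageblock_nr_pages (i.e. PAGE_SIZE << pageblock_order), so while pageblock_order is still 0 the early alignment check degenerates to a plain PAGE_SIZE check and silently passes for regions that are not pageblock-aligned. The standalone userspace sketch below models that arithmetic; the 4K page size and the example base/size values are assumptions for illustration, not taken from the patch.

/*
 * Standalone userspace model (not kernel code) of the alignment check in
 * cma_init_reserved_mem(). PAGE_SIZE and the base/size values are assumed.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE		4096ULL			/* assumed 4K pages */
#define MIN_ALIGN(order)	(PAGE_SIZE << (order))	/* models CMA_MIN_ALIGNMENT_BYTES */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	uint64_t base = 0x10000000ULL + PAGE_SIZE;	/* page-aligned, not pageblock-aligned */
	uint64_t size = 64ULL << 20;			/* 64 MiB */

	/* Too early in boot: pageblock_order == 0, so the check collapses to PAGE_SIZE */
	printf("order 0: aligned = %d\n", IS_ALIGNED(base | size, MIN_ALIGN(0)));

	/* Once pageblock_order is initialized (e.g. 9), the same region is rejected,
	 * a failure that previously surfaced only at CMA area activation time */
	printf("order 9: aligned = %d\n", IS_ALIGNED(base | size, MIN_ALIGN(9)));

	return 0;
}

With the new guard in place, the first case fails fast with -EINVAL at cma_init_reserved_mem() time instead of tripping up the area activation later in boot.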
Diffstat (limited to 'mm')
-rw-r--r--   mm/cma.c   9
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/mm/cma.c b/mm/cma.c
index c5869d0001ad..de5bc0c81fc2 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -181,6 +181,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
if (!size || !memblock_is_region_reserved(base, size))
return -EINVAL;
+ /*
+ * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+ * needs pageblock_order to be initialized. Let's enforce it.
+ */
+ if (!pageblock_order) {
+ pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+ return -EINVAL;
+ }
+
/* ensure minimal alignment required by mm core */
if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
return -EINVAL;