author		Mel Gorman <mgorman@techsingularity.net>	2023-05-15 12:33:44 +0100
committer	Andrew Morton <akpm@linux-foundation.org>	2023-06-09 16:25:21 -0700
commit		90ed667c03fe553a41d79057740ed5df951eead0 (patch)
tree		9fcbf384a5702d2f85cfdd516bb099536927c044 /mm/compaction.c
parent		590ccea80af950685de7f72ec43831765e5c8cb1 (diff)
Revert "Revert "mm/compaction: fix set skip in fast_find_migrateblock""
This reverts commit 95e7a450b819 ("Revert "mm/compaction: fix set skip in
fast_find_migrateblock"").
Commit 7efc3b726103 ("mm/compaction: fix set skip in
fast_find_migrateblock") was reverted due to bug reports about khugepaged
consuming large amounts of CPU without making progress. The underlying
bug was partially fixed by commit cfccd2e63e7e ("mm, compaction: finish
pageblocks on complete migration failure") but it only mitigated the
problem, and Vlastimil Babka pointed out that the same issue could
theoretically happen to kcompactd.
As pageblocks containing pages that fail to migrate should now be forcibly
rescanned to set the skip hint if skip hints are used,
fast_find_migrateblock() should no longer loop on a small subset of
pageblocks for prolonged periods of time. Revert the revert so
fast_find_migrateblock() is effective again.
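To make the interaction concrete, below is a minimal user-space sketch, not
kernel code, of the behaviour described above: the fast search only avoids
blocks whose skip hint is already set, and the skip hint is now also set when
migration of a block completely fails, so the search cannot keep returning
the same stuck block. The types and helpers (struct block, fast_find_block(),
migrate_block(), always_fails) are illustrative assumptions, not
mm/compaction.c APIs.

/*
 * Hypothetical sketch only: models pageblocks with a skip bit.  The fast
 * search returns the first non-skipped block; migration sets the skip hint
 * on success and, crucially, also on complete failure.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_BLOCKS 8

struct block {
	bool skip;		/* models the pageblock skip hint */
	bool always_fails;	/* models pages that can never migrate */
};

static struct block blocks[NR_BLOCKS] = {
	[0] = { .always_fails = true },	/* a "stuck" pageblock */
};

/* Return the index of the first block whose skip hint is not set, or -1. */
static int fast_find_block(void)
{
	for (int i = 0; i < NR_BLOCKS; i++)
		if (!blocks[i].skip)
			return i;
	return -1;
}

/* Migrate a block; the skip hint is set whether or not migration worked. */
static bool migrate_block(int i)
{
	blocks[i].skip = true;
	return !blocks[i].always_fails;
}

int main(void)
{
	/*
	 * If migrate_block() did not set the skip hint on complete failure,
	 * fast_find_block() would return block 0 forever -- the looping
	 * behaviour the changelog describes.  With the hint set, the search
	 * moves on each round.
	 */
	for (int round = 0; round < 4; round++) {
		int i = fast_find_block();

		if (i < 0)
			break;
		printf("round %d: block %d, migrated=%d\n",
		       round, i, migrate_block(i));
	}
	return 0;
}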
Using the mmtests config workload-usemem-stress-numa-compact, the number
of unique ranges scanned was analysed for both kcompactd and !kcompactd
activity.
6.4.0-rc1-vanilla
kcompactd
7 range=(0x10d600~0x10d800)
7 range=(0x110c00~0x110e00)
7 range=(0x110e00~0x111000)
7 range=(0x111800~0x111a00)
7 range=(0x111a00~0x111c00)
!kcompactd
1 range=(0x113e00~0x114000)
1 range=(0x114000~0x114020)
1 range=(0x114400~0x114489)
1 range=(0x114489~0x1144aa)
1 range=(0x1144aa~0x114600)
6.4.0-rc1-mm-revertfastmigrate
kcompactd
17 range=(0x104200~0x104400)
17 range=(0x104400~0x104600)
17 range=(0x104600~0x104800)
17 range=(0x104800~0x104a00)
17 range=(0x104a00~0x104c00)
!kcompactd
1793 range=(0x15c200~0x15c400)
5436 range=(0x105800~0x105a00)
19826 range=(0x150a00~0x150c00)
19833 range=(0x150800~0x150a00)
19834 range=(0x11ce00~0x11d000)
6.4.0-rc1-mm-follupfastfind
kcompactd
22 range=(0x107200~0x107400)
23 range=(0x107400~0x107600)
23 range=(0x107600~0x107800)
23 range=(0x107c00~0x107e00)
23 range=(0x107e00~0x108000)
!kcompactd
3 range=(0x890240~0x890400)
5 range=(0x886e00~0x887000)
5 range=(0x88a400~0x88a600)
6 range=(0x88f800~0x88fa00)
9 range=(0x88a400~0x88a420)
Note that the vanilla kernel and the full series had some duplication of
ranges scanned but it was not severe and would be in line with compaction
resets when the skip hints are cleared. Just a revert of commit
7efc3b726103 ("mm/compaction: fix set skip in fast_find_migrateblock")
showed excessive rescans of the same ranges so the series should not
reintroduce bug 1206848.
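As an aside on methodology, per-range counts in the format above
("<count> range=(start~end)") can be produced by extracting the range=
tokens from the scan output and counting duplicates, much like
grep -o | sort | uniq -c. The stand-alone C helper below is a hypothetical
equivalent, assuming one logged range per input line; it is not part of
mmtests or the kernel.

/* Hypothetical helper: count how often each unique range= token appears. */
#include <stdio.h>
#include <string.h>

#define MAX_RANGES 4096
#define RANGE_LEN  64

static char ranges[MAX_RANGES][RANGE_LEN];
static unsigned long counts[MAX_RANGES];
static int nr_ranges;

int main(void)
{
	char line[512];

	while (fgets(line, sizeof(line), stdin)) {
		char *p = strstr(line, "range=(");
		char *end;
		int i;

		if (!p)
			continue;
		end = strchr(p, ')');
		if (!end || end - p >= RANGE_LEN - 1)
			continue;
		end[1] = '\0';

		/* Linear search is enough for a few thousand unique ranges. */
		for (i = 0; i < nr_ranges; i++)
			if (!strcmp(ranges[i], p))
				break;
		if (i == nr_ranges) {
			if (nr_ranges == MAX_RANGES)
				continue;
			strcpy(ranges[nr_ranges++], p);
		}
		counts[i]++;
	}

	for (int i = 0; i < nr_ranges; i++)
		printf("%7lu %s\n", counts[i], ranges[i]);
	return 0;
}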
Link: https://bugzilla.suse.com/show_bug.cgi?id=1206848
Link: https://lkml.kernel.org/r/20230515113344.6869-5-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Tested-by: Raghavendra K T <raghavendra.kt@amd.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Chuyi Zhou <zhouchuyi@bytedance.com>
Cc: Jiri Slaby <jirislaby@kernel.org>
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/compaction.c')
-rw-r--r--	mm/compaction.c | 1
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 02aa3788765d..f6465ae74d3f 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1866,7 +1866,6 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 					pfn = cc->zone->zone_start_pfn;
 				cc->fast_search_fail = 0;
 				found_block = true;
-				set_pageblock_skip(freepage);
 				break;
 			}
 		}