<feed xmlns='http://www.w3.org/2005/Atom'>
<title>pm24.git/drivers/mtd/ubi, branch rust-6.8</title>
<id>https://git.kobert.dev/pm24.git/atom?h=rust-6.8</id>
<link rel='self' href='https://git.kobert.dev/pm24.git/atom?h=rust-6.8'/>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/'/>
<updated>2023-10-28T21:18:39Z</updated>
<entry>
<title>ubi: block: Fix use-after-free in ubiblock_cleanup</title>
<updated>2023-10-28T21:18:39Z</updated>
<author>
<name>ZhaoLong Wang</name>
<email>wangzhaolong1@huawei.com</email>
</author>
<published>2023-09-21T02:01:42Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=d07cec9c238ae8fc6c1a9f3f5d30a2f8ec6cdc71'/>
<id>urn:sha1:d07cec9c238ae8fc6c1a9f3f5d30a2f8ec6cdc71</id>
<content type='text'>
The following BUG is reported when a ubiblock is removed:

 ==================================================================
 BUG: KASAN: slab-use-after-free in ubiblock_cleanup+0x88/0xa0 [ubi]
 Read of size 4 at addr ffff88810c8f3804 by task ubiblock/1716

 CPU: 5 PID: 1716 Comm: ubiblock Not tainted 6.6.0-rc2+ #135
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190727_073836-buildvm-ppc64le-16.ppc.fedoraproject.org-3.fc31 04/01/2014
 Call Trace:
  &lt;TASK&gt;
  dump_stack_lvl+0x37/0x50
  print_report+0xd0/0x620
  kasan_report+0xb6/0xf0
  ubiblock_cleanup+0x88/0xa0 [ubi]
  ubiblock_remove+0x121/0x190 [ubi]
  vol_cdev_ioctl+0x355/0x630 [ubi]
  __x64_sys_ioctl+0xc7/0x100
  do_syscall_64+0x3f/0x90
  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
 RIP: 0033:0x7f08d7445577
 Code: b3 66 90 48 8b 05 11 89 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 &lt;48&gt; 3d 01 f0 ff ff 73 01 c3 48 8b 0d e1 8
 RSP: 002b:00007ffde05a3018 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
 RAX: ffffffffffffffda RBX: 00000000ffffffff RCX: 00007f08d7445577
 RDX: 0000000000000000 RSI: 0000000000004f08 RDI: 0000000000000003
 RBP: 0000000000816010 R08: 00000000008163a7 R09: 0000000000000000
 R10: 0000000000000003 R11: 0000000000000206 R12: 0000000000000003
 R13: 00007ffde05a3130 R14: 0000000000000000 R15: 0000000000000000
  &lt;/TASK&gt;

 Allocated by task 1715:
  kasan_save_stack+0x22/0x50
  kasan_set_track+0x25/0x30
  __kasan_kmalloc+0x7f/0x90
  __alloc_disk_node+0x40/0x2b0
  __blk_mq_alloc_disk+0x3e/0xb0
  ubiblock_create+0x2ba/0x620 [ubi]
  vol_cdev_ioctl+0x581/0x630 [ubi]
  __x64_sys_ioctl+0xc7/0x100
  do_syscall_64+0x3f/0x90
  entry_SYSCALL_64_after_hwframe+0x6e/0xd8

 Freed by task 0:
  kasan_save_stack+0x22/0x50
  kasan_set_track+0x25/0x30
  kasan_save_free_info+0x2b/0x50
  __kasan_slab_free+0x10e/0x190
  __kmem_cache_free+0x96/0x220
  bdev_free_inode+0xa4/0xf0
  rcu_core+0x496/0xec0
  __do_softirq+0xeb/0x384

 The buggy address belongs to the object at ffff88810c8f3800
  which belongs to the cache kmalloc-1k of size 1024
 The buggy address is located 4 bytes inside of
  freed 1024-byte region [ffff88810c8f3800, ffff88810c8f3c00)

 The buggy address belongs to the physical page:
 page:00000000d03de848 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10c8f0
 head:00000000d03de848 order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
 flags: 0x200000000000840(slab|head|node=0|zone=2)
 page_type: 0xffffffff()
 raw: 0200000000000840 ffff888100042dc0 ffffea0004244400 dead000000000002
 raw: 0000000000000000 0000000080100010 00000001ffffffff 0000000000000000
 page dumped because: kasan: bad access detected

 Memory state around the buggy address:
  ffff88810c8f3700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
  ffff88810c8f3780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 &gt;ffff88810c8f3800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                    ^
  ffff88810c8f3880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
  ffff88810c8f3900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ==================================================================

Fix it by using a local variable to record the gendisk ID.
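
A minimal sketch of the fix (the teardown path is abbreviated and the
surrounding calls may differ; the point is caching the ID before
put_disk() frees the gendisk):

 static void ubiblock_cleanup(struct ubiblock *dev)
 {
 	/* Cache the gendisk ID before put_disk() frees dev-&gt;gd */
 	int id = dev-&gt;gd-&gt;first_minor;

 	del_gendisk(dev-&gt;gd);
 	put_disk(dev-&gt;gd);
 	/* Use the cached id; dev-&gt;gd is freed memory at this point */
 	idr_remove(&amp;ubiblock_minor_idr, id);
 }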

Fixes: 77567b25ab9f ("ubi: use blk_mq_alloc_disk and blk_cleanup_disk")
Signed-off-by: ZhaoLong Wang &lt;wangzhaolong1@huawei.com&gt;
Reviewed-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: fastmap: Add control in 'UBI_IOCATT' ioctl to reserve PEBs for filling pools</title>
<updated>2023-10-28T21:16:00Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:45Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=ac085cfe57df2cc1d7a5c4c5e64b8780c8ad452f'/>
<id>urn:sha1:ac085cfe57df2cc1d7a5c4c5e64b8780c8ad452f</id>
<content type='text'>
This patch introduces a new field 'need_resv_pool' in struct
'ubi_attach_req' to control whether or not to reserve free PEBs for
filling the pool/wl_pool.
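
A sketch of the resulting uapi structure (the field placement and the
padding size shown here are assumptions, not copied from the patch):

 struct ubi_attach_req {
 	__s32 ubi_num;
 	__s32 mtd_num;
 	__s32 vid_hdr_offset;
 	__s16 max_beb_per1024;
 	__s8 disable_fm;
 	__s8 need_resv_pool;	/* new: reserve free PEBs for pool/wl_pool */
 	__s8 padding[8];	/* shrunk by one byte for the new field */
 };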

Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: fastmap: Add module parameter to control reserving filling pool PEBs</title>
<updated>2023-10-28T21:15:44Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:44Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=d4c48e5b58f12835de779f5425ef741c4d0fb53e'/>
<id>urn:sha1:d4c48e5b58f12835de779f5425ef741c4d0fb53e</id>
<content type='text'>
Add a 6th module parameter to 'mtd=xxx' to control whether or not to
reserve PEBs for filling the pool/wl_pool.
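
A usage sketch (the parameter order is inferred from the '6th
parameter' wording; the module's documentation is authoritative):

 # mtd=&lt;name|num|path&gt;[,&lt;vid_hdr_offs&gt;[,max_beb_per1024[,ubi_num[,enable_fm[,need_resv_pool]]]]]
 modprobe ubi mtd=0,0,0,0,0,1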

Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: fastmap: Fix lapsed wear leveling for first 64 PEBs</title>
<updated>2023-10-28T21:14:55Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:43Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=90e0be56144b064be1a816cbdd184d2a8be7061b'/>
<id>urn:sha1:90e0be56144b064be1a816cbdd184d2a8be7061b</id>
<content type='text'>
The anchor PEB must be picked from the first 64 PEBs, so these PEBs can
end up with erase counters far greater than those of the other PEBs,
especially when free space is nearly running out.
ubi_update_fastmap is called whenever the pool/wl_pool becomes empty,
and the old anchor PEB is erased when the fastmap is updated. Given a
UBI device with N PEBs whose free PEBs are nearly exhausted, the pool is
refilled with 1 PEB every time ubi_update_fastmap is invoked. So
t=N/POOL_SIZE[1]/64 means that, in the worst case, the erase counters of
the first 64 PEBs are in theory t times greater than those of the other
PEBs.
After running fsstress for 24h, the erase counter statistics for two UBI
devices are shown below (CONFIG_MTD_UBI_WL_THRESHOLD=128):

Device A(1024 PEBs, pool=50, wl_pool=25):
=========================================================
from              to     count      min      avg      max
---------------------------------------------------------
0        ..        9:        0        0        0        0
10       ..       99:        0        0        0        0
100      ..      999:        0        0        0        0
1000     ..     9999:        0        0        0        0
10000    ..    99999:      960    29224    29282    29362
100000   ..      inf:       64   117897   117934   117940
---------------------------------------------------------
Total               :     1024    29224    34822   117940

Device B(8192 PEBs, pool=256, wl_pool=128):
=========================================================
from              to     count      min      avg      max
---------------------------------------------------------
0        ..        9:        0        0        0        0
10       ..       99:        0        0        0        0
100      ..      999:        0        0        0        0
1000     ..     9999:     8128     2253     2321     2387
10000    ..    99999:       64    35387    35387    35388
100000   ..      inf:        0        0        0        0
---------------------------------------------------------
Total               :     8192     2253     2579    35388

The key point is to reduce the fastmap updating frequency by enlarging
POOL_SIZE, so let UBI reserve ubi-&gt;fm_pool.max_size PEBs during
attach. Then POOL_SIZE stays at ubi-&gt;fm_pool.max_size/2 even when
free space is running out.
Given a UBI device with 8192 PEBs (16384/8192/4096 PEBs are common for
large-capacity flash), t=8192/128/64=1. A fastmap update happens when
either the wl_pool or the pool is empty, so setting fm_pool_rsv_cnt to
ubi-&gt;fm_pool.max_size keeps the wl_pool completely full.
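
A minimal sketch of the reservation (the hook location is an
assumption; only the assignment mirrors the description above):

 /* Reserve pool-sized headroom at attach time so refilling never
  * starves: POOL_SIZE stays at least ubi-&gt;fm_pool.max_size/2.
  */
 if (!ubi-&gt;fm_disabled)
 	ubi-&gt;fm_pool_rsv_cnt = ubi-&gt;fm_pool.max_size;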

After pool reservation, running fsstress for 24h:

Device A(1024 PEBs, pool=50, wl_pool=25):
=========================================================
from              to     count      min      avg      max
---------------------------------------------------------
0        ..        9:        0        0        0        0
10       ..       99:        0        0        0        0
100      ..      999:        0        0        0        0
1000     ..     9999:        0        0        0        0
10000    ..    99999:     1024    33801    33997    34056
100000   ..      inf:        0        0        0        0
---------------------------------------------------------
Total               :     1024    33801    33997    34056

Device B(8192 PEBs, pool=256, wl_pool=128):
=========================================================
from              to     count      min      avg      max
---------------------------------------------------------
0        ..        9:        0        0        0        0
10       ..       99:        0        0        0        0
100      ..      999:        0        0        0        0
1000     ..     9999:     8192     2205     2397     2460
10000    ..    99999:        0        0        0        0
100000   ..      inf:        0        0        0        0
---------------------------------------------------------
Total               :     8192     2205     2397     2460

The difference in erase counters between the first 64 PEBs and the
others is now under WL_FREE_MAX_DIFF (2*UBI_WL_THRESHOLD = 2*128 = 256):
  Device A: 34056 - 33801 = 255
  Device B: 2460 - 2205 = 255

Next patch will add a switch to control whether UBI needs to reserve
PEBs for filling pool.

Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: fastmap: Get wl PEB even ec beyonds the 'max' if free PEBs are run out</title>
<updated>2023-10-28T21:07:54Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:42Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=761893bd490b039258fda893c2a15fa59334c820'/>
<id>urn:sha1:761893bd490b039258fda893c2a15fa59334c820</id>
<content type='text'>
This is part 2 of fixing the cyclic reuse of single fastmap data PEBs.

Consider this situation: there are four free PEBs for fm_anchor, pool,
wl_pool and the fastmap data PEB, with erase counters 100, 100, 100 and
5096 (ubi-&gt;beb_rsvd_pebs is 0). The PEB with erase counter 5096 is
always picked for fastmap data according to the implementation of
find_wl_entry(); since the fastmap data PEB is never scheduled for wl,
there end up being two PEBs (fm data) with far greater erase counters
than the other PEBs.
Get a wl PEB even if its erase counter exceeds the 'max' in
find_wl_entry() when the free PEBs run out after filling the pools and
fm data. Then the PEB with the biggest erase counter is taken as the wl
PEB, so it can be scheduled for wl.
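
A simplified sketch of the selection policy (not the actual
find_wl_entry() diff; find_entry_below_max() is a hypothetical helper
standing in for the existing tree walk):

 /* Prefer a free PEB whose erase counter is below 'max'; if every
  * free PEB exceeds it (the pools and fm data already consumed the
  * rest), fall back to the biggest-EC PEB so it can be scheduled
  * for wl instead of being reused forever.
  */
 e = find_entry_below_max(root, max);
 if (!e)
 	e = rb_entry(rb_last(root), struct ubi_wl_entry, u.rb);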

Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: fastmap: may_reserve_for_fm: Don't reserve PEB if fm_anchor exists</title>
<updated>2023-10-28T20:49:14Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:41Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=eada823e6a6fb856548f6e0c7a343cd5c41bb9d3'/>
<id>urn:sha1:eada823e6a6fb856548f6e0c7a343cd5c41bb9d3</id>
<content type='text'>
This is part 1 of fixing the cyclic reuse of single fastmap data PEBs.

After running fsstress on UBIFS for a while, UBI (16384 blocks, fastmap
takes 2 blocks) has an erase block (PEB 8031) whose erase counter is far
greater than that of any other PEB:

=========================================================
from              to     count      min      avg      max
---------------------------------------------------------
0        ..        9:        0        0        0        0
10       ..       99:      532       84       92       99
100      ..      999:    15787      100      147      229
1000     ..     9999:       64     4699     4765     4826
10000    ..    99999:        0        0        0        0
100000   ..      inf:        1   272935   272935   272935
---------------------------------------------------------
Total               :    16384       84      180   272935

Unlike fm_anchor, there are no candidate PEBs for the fastmap data
area, so old fastmap data PEBs are reused after all free PEBs have been
filled into the pool/wl_pool:
ubi_update_fastmap
 for (i = 1; i &lt; new_fm-&gt;used_blocks; i++)
  erase_block(ubi, old_fm-&gt;e[i]-&gt;pnum)
  new_fm-&gt;e[i] = old_fm-&gt;e[i]

According to the wear leveling algorithm, UBI selects one
small-erase-counter PEB from ubi-&gt;used and one big-erase-counter PEB
from the wl_pool; the reused fastmap data PEB is in neither of these
trees. UBI would not schedule this PEB for wl even if it were in
ubi-&gt;used, because the wl algorithm expects a small erase counter for
a used PEB.

Don't reserve a PEB for fastmap in may_reserve_for_fm() if fm_anchor
already exists. Otherwise, when UBI is running out of free PEBs, the
only free PEB (pnum &lt; 64) is skipped and the fastmap data keeps being
written to the same old PEB.
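
A condensed sketch of the check (the exact condition and skip logic in
may_reserve_for_fm() may differ; skip_to_next_free_peb() is a
hypothetical stand-in):

 /* Only set a low-numbered PEB aside for fastmap while no anchor
  * is held yet; once fm_anchor exists, hand the PEB out normally.
  */
 if (e &amp;&amp; !ubi-&gt;fm_disabled &amp;&amp; !ubi-&gt;fm &amp;&amp; !ubi-&gt;fm_anchor &amp;&amp;
     e-&gt;pnum &lt; UBI_FM_MAX_START)
 	e = skip_to_next_free_peb(root);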

Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: fastmap: Remove unneeded break condition while filling pools</title>
<updated>2023-10-28T20:47:43Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:40Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=415e4723c4325464d489c1ef3335eb6679905471'/>
<id>urn:sha1:415e4723c4325464d489c1ef3335eb6679905471</id>
<content type='text'>
Change the pool filling stop condition. Commit d09e9a2bddba ("ubi:
fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool
not empty") reserved fastmap data PEBs after filling 1 PEB into the
wl_pool. Now that wait_free_pebs_for_pool() ensures enough free PEBs
before the pools are filled, there will still be at least 1 PEB in the
pool and 1 PEB in the wl_pool after ubi_refill_pools() runs.

Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: fastmap: Wait until there are enough free PEBs before filling pools</title>
<updated>2023-10-28T20:43:40Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:39Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=a2ea69dac674df0fba59c66146a21145108a85ed'/>
<id>urn:sha1:a2ea69dac674df0fba59c66146a21145108a85ed</id>
<content type='text'>
Wait until there are enough free PEBs before filling the pool/wl_pool;
sometimes the erase_worker is not scheduled in time, which causes two
problems (see the sketch after this list):
 A. Few PEBs get filled into the pool, which makes ubi_update_fastmap
    be called frequently and causes the first 64 PEBs to be erased far
    more often than the other PEBs. Waiting for free PEBs before
    filling the pool reduces the fastmap updating frequency and
    prolongs the flash's service life.
 B. When space is nearly running out, ubi_refill_pools() cannot
    guarantee that the pool and wl_pool are filled with free PEBs,
    because of the delayed erase_worker. With this patch applied,
    free PEBs are guaranteed to exist in the pool after one call of
    ubi_update_fastmap.
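
A hypothetical sketch of the waiting idea (the helper names and the
wait mechanism are assumptions; the real wait_free_pebs_for_pool()
may differ):

 /* Block the refill until the erase_worker has produced enough
  * free PEBs to fill both pools.
  */
 while (free_peb_count(ubi) &lt; pools_target(ubi))
 	produce_free_peb(ubi);	/* runs/waits on pending erase work */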

Besides, this patch is a preparation for fixing the large erase counter
of the fastmap data block and the lapsed wear leveling of the first 64
PEBs.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: fastmap: Use free pebs reserved for bad block handling</title>
<updated>2023-10-28T20:41:01Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:38Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=8ff4e620ac93a5d332735e4f5a4ff31d80682b9a'/>
<id>urn:sha1:8ff4e620ac93a5d332735e4f5a4ff31d80682b9a</id>
<content type='text'>
If new bad PEBs occur, UBI first consumes ubi-&gt;beb_rsvd_pebs, then
ubi-&gt;avail_pebs, and finally becomes read-only once both are 0, which
means the amount of PEBs available to user volumes is not affected.
Besides, UBI keeps ubi-&gt;beb_rsvd_pebs free PEBs in reserve while
filling the wl pool or getting free PEBs, but ubi-&gt;avail_pebs is not
reserved. So ubi-&gt;beb_rsvd_pebs and ubi-&gt;avail_pebs have nothing to
do with the usage of free PEBs; UBI can use all free PEBs.

Commit 78d6d497a648 ("UBI: Move fastmap specific functions out of wl.c")
already removed the beb_rsvd_pebs check while filling the pool. Now,
don't reserve ubi-&gt;beb_rsvd_pebs while filling the wl_pool either.
This fills more PEBs into the pools and also reduces the fastmap
updating frequency.

Also remove the beb_rsvd_pebs check in ubi_wl_get_fm_peb().
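
A sketch of the kind of check being removed (illustrative; the exact
expression in ubi_wl_get_fm_peb() may differ):

 /* Before: refuse a free PEB if that would dip into the
  * bad-block-handling reserve.
  */
 if (!ubi-&gt;free.rb_node ||
     (ubi-&gt;free_count - ubi-&gt;beb_rsvd_pebs &lt; 1))
 	goto out;

 /* After: only require that a free PEB exists at all. */
 if (!ubi-&gt;free.rb_node)
 	goto out;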

Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>ubi: Replace erase_block() with sync_erase()</title>
<updated>2023-10-28T20:34:24Z</updated>
<author>
<name>Zhihao Cheng</name>
<email>chengzhihao1@huawei.com</email>
</author>
<published>2023-08-28T06:38:37Z</published>
<link rel='alternate' type='text/html' href='https://git.kobert.dev/pm24.git/commit/?id=c19286d70aaa361cdb073a68a1f66232c359e2fd'/>
<id>urn:sha1:c19286d70aaa361cdb073a68a1f66232c359e2fd</id>
<content type='text'>
Since erase_block() has the same logic as sync_erase(), just replace it
with sync_erase(), and rename 'sync_erase()' to 'ubi_sync_erase()'.

Signed-off-by: Zhihao Cheng &lt;chengzhihao1@huawei.com&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
</feed>
