author     Christoph Hellwig <hch@lst.de>          2023-12-07 08:26:57 +0100
committer  Christian Brauner <brauner@kernel.org>  2024-02-01 14:20:06 +0100
commit     7ea1d9b4a840c2dd01d1234663d4a8ef256cfe39 (patch)
tree       6dc566c1a592494d2e5030e0f15c3b15bd71e2e9 /fs/iomap
parent     6613476e225e090cc9aad49be7fa504e290dd33d (diff)
iomap: clear the per-folio dirty bits on all writeback failures
write_cache_pages always clears the page dirty bit before calling into
the file system, and leaves folios that hit a writeback failure without
the dirty bit set after return. We also clear the per-block dirty bits
on writeback failure unless no I/O has been submitted, which leaves the
folio in an inconsistent state where the folio dirty bit is clear but
one or more per-block dirty bits are still set. This seems to be due to
where the iomap_clear_range_dirty call was inserted into the existing,
not very clearly structured code when per-block dirty bit support was
added, and was not actually intentional. Switch to always clearing the
per-block dirty bits on writeback failure.
Fixes: 4ce02c679722 ("iomap: Add per-block dirty state tracking to improve performance")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20231207072710.176093-2-hch@lst.de
Signed-off-by: Christian Brauner <brauner@kernel.org>
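To illustrate the ordering problem, here is a condensed sketch of the
pre-patch error path in iomap_writepage_map, reconstructed from the
removed lines in the diff below (unrelated parts of the function are
omitted):

	if (unlikely(error)) {
		if (wpc->ops->discard_folio)
			wpc->ops->discard_folio(folio, pos);
		if (!count) {
			/*
			 * Early exit before iomap_clear_range_dirty() runs:
			 * the per-block dirty bits stay set even though
			 * write_cache_pages has already cleared the folio
			 * dirty flag.
			 */
			folio_unlock(folio);
			goto done;
		}
	}
	iomap_clear_range_dirty(folio, 0, folio_size(folio));

With the patch applied, the early exit is moved below the
iomap_clear_range_dirty call, so the per-block dirty bits are cleared on
every writeback failure before the folio is unlocked.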
Diffstat (limited to 'fs/iomap')
-rw-r--r--  fs/iomap/buffered-io.c | 18
1 file changed, 11 insertions, 7 deletions
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 093c4515b22a..228fd2e05e12 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1833,16 +1833,10 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	if (unlikely(error)) {
 		/*
 		 * Let the filesystem know what portion of the current page
-		 * failed to map. If the page hasn't been added to ioend, it
-		 * won't be affected by I/O completion and we must unlock it
-		 * now.
+		 * failed to map.
 		 */
 		if (wpc->ops->discard_folio)
 			wpc->ops->discard_folio(folio, pos);
-		if (!count) {
-			folio_unlock(folio);
-			goto done;
-		}
 	}
 
 	/*
@@ -1851,6 +1845,16 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * all the dirty bits in the folio here.
 	 */
 	iomap_clear_range_dirty(folio, 0, folio_size(folio));
+
+	/*
+	 * If the page hasn't been added to the ioend, it won't be affected by
+	 * I/O completion and we must unlock it now.
+	 */
+	if (error && !count) {
+		folio_unlock(folio);
+		goto done;
+	}
+
 	folio_start_writeback(folio);
 	folio_unlock(folio);