| author | Huang Ying <ying.huang@intel.com> | 2017-05-03 14:52:49 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2017-05-03 15:52:08 -0700 |
| commit | 322b8afe4a65906c133102532e63a278775cc5f0 (patch) | |
| tree | 479940969c31f1bd766de491f62fbdceec70d066 /mm/internal.h | |
| parent | 9a4caf1e9fa4864ce21ba9584a2c336bfbc72740 (diff) | |
mm, swap: Fix a race in free_swap_and_cache()
Before cluster locks were used in free_swap_and_cache(),
swap_info_struct->lock was held while freeing the swap entry and
acquiring the page lock, so the page's swap count could not change when
the page information was tested later. But with the cluster lock, the
cluster lock (or swap_info_struct->lock) is held only while the swap
entry is freed. So before the page lock is acquired, the page's swap
count may be changed by another thread. If the swap count is not 0, the
page should not be deleted from the swap cache. Fix this by checking
the page's swap count again after acquiring the page lock.
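For illustration, a minimal sketch of the control flow the fix describes,
using helper names from mm/swapfile.c of that era (_swap_info_get(),
__swap_entry_free(), swap_swapcount(), delete_from_swap_cache()); this is
a simplified sketch of the re-check, not the verbatim patch:

	/* Sketch of free_swap_and_cache() with the added re-check. */
	static void free_swap_and_cache_sketch(swp_entry_t entry)
	{
		struct swap_info_struct *p;
		struct page *page = NULL;

		p = _swap_info_get(entry);
		if (!p)
			return;

		/* Only the cluster lock (or si->lock) is held here. */
		if (__swap_entry_free(p, entry, 1) == SWAP_HAS_CACHE)
			page = find_get_page(swap_address_space(entry),
					     swp_offset(entry));

		if (page && !trylock_page(page)) {
			put_page(page);
			page = NULL;
		}

		if (page) {
			/*
			 * Another thread may have raised the swap count
			 * between freeing the entry and locking the page,
			 * so re-check it before dropping the page from
			 * the swap cache.
			 */
			if (PageSwapCache(page) && !PageWriteback(page) &&
			    !page_mapped(page) && !swap_swapcount(p, entry)) {
				delete_from_swap_cache(page);
				SetPageDirty(page);
			}
			unlock_page(page);
			put_page(page);
		}
	}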
I found the race while reviewing the code, so I didn't trigger it with a
test program. If the race occurs for an anonymous page shared by
multiple processes via fork, multiple pages will be allocated and
swapped in from the swap device for the one previously shared page.
The user-visible effect is that more memory is used and the access
latency for the page is higher, i.e. a performance regression.
Link: http://lkml.kernel.org/r/20170301143905.12846-1-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/internal.h')
0 files changed, 0 insertions, 0 deletions