path: root/lib
2022-02-26  kasan: test: prevent cache merging in kmem_cache_double_destroy  (Andrey Konovalov)
With HW_TAGS KASAN and kasan.stacktrace=off, the cache created in the kmem_cache_double_destroy() test might get merged with an existing one. Thus, the first kmem_cache_destroy() call won't actually destroy it but will only decrease the refcount. This causes the test to fail. Provide an empty constructor for the created cache to prevent the cache from getting merged. Link: https://lkml.kernel.org/r/b597bd434c49591d8af00ee3993a42c609dc9a59.1644346040.git.andreyknvl@google.com Fixes: f98f966cd750 ("kasan: test: add test case for double-kmem_cache_destroy()") Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Marco Elver <elver@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
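For illustration, the fix boils down to registering a do-nothing constructor when creating the cache, which is enough to make it unmergeable (a sketch of the approach; the KUnit boilerplate around it is elided):

  static void empty_cache_ctor(void *object) { }

  /* An unmergeable cache: the first destroy now really destroys it,
   * so the second destroy is the double-destroy the test expects. */
  cache = kmem_cache_create("test_cache", 200, 0, 0, empty_cache_ctor);
  kmem_cache_destroy(cache);
  KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_destroy(cache));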
2022-02-25  list: test: Add a test for list_entry_is_head()  (David Gow)
The list_entry_is_head() macro was added[1] after the list KUnit tests, so wasn't tested. Add a new KUnit test to complete the set. [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e130816164e244b692921de49771eeb28205152d Signed-off-by: David Gow <davidgow@google.com> Acked-by: Daniel Latypov <dlatypov@google.com> Acked-by: Brendan Higgins <brendanhiggins@google.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
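Such a test can be as small as the sketch below (assuming the list test suite's usual pattern of a struct with an embedded list_head; the names here are illustrative):

  struct list_test_struct {
          int data;
          struct list_head list;
  };

  static void list_test_list_entry_is_head(struct kunit *test)
  {
          struct list_test_struct a, b;
          LIST_HEAD(list);

          list_add_tail(&a.list, &list);
          list_add_tail(&b.list, &list);

          /* A real entry is not the head... */
          KUNIT_EXPECT_FALSE(test, list_entry_is_head(&a, &list, list));
          /* ...but stepping past the last entry lands on the head. */
          KUNIT_EXPECT_TRUE(test,
                  list_entry_is_head(list_next_entry(&b, list), &list, list));
  }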
2022-02-25  list: test: Add a test for list_is_head()  (David Gow)
list_is_head() was added recently[1], and didn't have a KUnit test. The implementation is trivial, so it's not a particularly exciting test, but it'd be nice to get back to full coverage of the list functions. [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/include/linux/list.h?id=0425473037db40d9e322631f2d4dc6ef51f97e88 Signed-off-by: David Gow <davidgow@google.com> Acked-by: Daniel Latypov <dlatypov@google.com> Acked-by: Brendan Higgins <brendanhiggins@google.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-02-25  list: test: Add test for list_del_init_careful()  (David Gow)
The list_del_init_careful() function was added[1] after the list KUnit test. Add a very basic test to cover it. Note that this test only covers the single-threaded behaviour (which matches list_del_init()), as is already the case with the test for list_empty_careful(). [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c6fe44d96fc1536af5b11cd859686453d1b7bfd1 Signed-off-by: David Gow <davidgow@google.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-02-25  uaccess: remove CONFIG_SET_FS  (Arnd Bergmann)
There are no remaining callers of set_fs(), so CONFIG_SET_FS can be removed globally, along with the thread_info field and any references to it. This turns access_ok() into a cheaper check against TASK_SIZE_MAX. As CONFIG_SET_FS is now gone, drop all remaining references to set_fs()/get_fs(), mm_segment_t, user_addr_max() and uaccess_kernel(). Acked-by: Sam Ravnborg <sam@ravnborg.org> # for sparc32 changes Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Tested-by: Sergey Matyukevich <sergey.matyukevich@synopsys.com> # for arc changes Acked-by: Stafford Horne <shorne@gmail.com> # [openrisc, asm-generic] Acked-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
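The resulting generic check is conceptually just a range comparison (a simplified sketch; the real helper also special-cases !MMU and alternate-address-space architectures):

  static inline int __access_ok(const void __user *ptr, unsigned long size)
  {
          unsigned long limit = TASK_SIZE_MAX;
          unsigned long addr = (unsigned long)ptr;

          /* rejects wrap-around as well as out-of-range accesses */
          return (size <= limit) && (addr <= (limit - size));
  }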
2022-02-25  lib/test_lockup: fix kernel pointer check for separate address spaces  (Arnd Bergmann)
test_kernel_ptr() uses access_ok() to figure out if a given address points to user space instead of kernel space. However on architectures that set CONFIG_ALTERNATE_USER_ADDRESS_SPACE, a pointer can be valid for both, and the check always fails because access_ok() returns true. Make the check for user space pointers conditional on the type of address space layout. Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-02-25  uaccess: fix type mismatch warnings from access_ok()  (Arnd Bergmann)
On some architectures, access_ok() does not do any argument type checking, so replacing the definition with a generic one causes a few warnings for harmless issues that were never caught before. Fix the ones that I found either through my own test builds or that were reported by the 0-day bot. Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-02-24  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
tools/testing/selftests/net/mptcp/mptcp_join.sh
  34aa6e3bccd8 ("selftests: mptcp: add ip mptcp wrappers")
  857898eb4b28 ("selftests: mptcp: add missing join check")
  6ef84b1517e0 ("selftests: mptcp: more robust signal race test")
  https://lore.kernel.org/all/20220221131842.468893-1-broonie@kernel.org/

drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/ct.c
  fb7e76ea3f3b6 ("net/mlx5e: TC, Skip redundant ct clear actions")
  c63741b426e11 ("net/mlx5e: Fix MPLSoUDP encap to use MPLS action information")
  09bf97923224f ("net/mlx5e: TC, Move pedit_headers_action to parse_attr")
  84ba8062e383 ("net/mlx5e: Test CT and SAMPLE on flow attr")
  efe6f961cd2e ("net/mlx5e: CT, Don't set flow flag CT for ct clear flow")
  3b49a7edec1d ("net/mlx5e: TC, Reject rules with multiple CT actions")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-24  vsprintf: Fix %pK with kptr_restrict == 0  (Christophe Leroy)
Although kptr_restrict is set to 0 and the kernel is booted with the no_hash_pointers parameter, the content of /proc/vmallocinfo is lacking the real addresses.

  / # cat /proc/vmallocinfo
  0x(ptrval)-0x(ptrval)    8192 load_module+0xc0c/0x2c0c pages=1 vmalloc
  0x(ptrval)-0x(ptrval)   12288 start_kernel+0x4e0/0x690 pages=2 vmalloc
  0x(ptrval)-0x(ptrval)   12288 start_kernel+0x4e0/0x690 pages=2 vmalloc
  0x(ptrval)-0x(ptrval)    8192 _mpic_map_mmio.constprop.0+0x20/0x44 phys=0x80041000 ioremap
  0x(ptrval)-0x(ptrval)   12288 _mpic_map_mmio.constprop.0+0x20/0x44 phys=0x80041000 ioremap
  ...

According to the documentation for /proc/sys/kernel/, %pK is equivalent to %p when kptr_restrict is set to 0. Fixes: 5ead723a20e0 ("lib/vsprintf: no_hash_pointers prints all addresses as unhashed") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/107476128e59bff11a309b5bf7579a1753a41aca.1645087605.git.christophe.leroy@csgroup.eu
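The fix, as described, is to route kptr_restrict == 0 through the same code path as a plain %p (a sketch of the relevant branch in restricted_pointer(); the other cases are elided):

  switch (kptr_restrict) {
  case 0:
          /* Handle as %p, which honors no_hash_pointers */
          return default_pointer(buf, end, ptr, spec);
  ...
  }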
2022-02-22  Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds)
Pull ITER_PIPE fix from Al Viro:
 "Fix for old sloppiness in pipe_buffer reuse"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  lib/iov_iter: initialize "flags" in new pipe_buffer
2022-02-21  random: remove unused tracepoints  (Jason A. Donenfeld)
These explicit tracepoints aren't really used and show signs of aging. It's work to keep these up to date, and before I attempted to keep them up to date, they weren't up to date, which indicates that they're not really used. These days there are better ways of introspecting anyway. Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-02-21  lib/iov_iter: initialize "flags" in new pipe_buffer  (Max Kellermann)
The functions copy_page_to_iter_pipe() and push_pipe() can both allocate a new pipe_buffer, but the "flags" member initializer is missing. Fixes: 241699cd72a8 ("new iov_iter flavour: pipe-backed") To: Alexander Viro <viro@zeniv.linux.org.uk> To: linux-fsdevel@vger.kernel.org To: linux-kernel@vger.kernel.org Cc: stable@vger.kernel.org Signed-off-by: Max Kellermann <max.kellermann@ionos.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
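The fix itself is a one-line initializer at each allocation site (a sketch; a stale PIPE_BUF_FLAG_CAN_MERGE surviving in the uninitialized field is what made this dangerous):

  buf->ops = &page_cache_pipe_buf_ops;
  buf->flags = 0;   /* the missing initializer */
  buf->page = page;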
2022-02-21  ARM: 9178/1: fix unmet dependency on BITREVERSE for HAVE_ARCH_BITREVERSE  (Julian Braha)
Resending this to properly add it to the patch tracker - thanks for letting me know, Arnd :) When ARM is enabled, and BITREVERSE is disabled, Kbuild gives the following warning:

  WARNING: unmet direct dependencies detected for HAVE_ARCH_BITREVERSE
    Depends on [n]: BITREVERSE [=n]
    Selected by [y]:
    - ARM [=y] && (CPU_32v7M [=n] || CPU_32v7 [=y]) && !CPU_32v6 [=n]

This is because ARM selects HAVE_ARCH_BITREVERSE without selecting BITREVERSE, despite HAVE_ARCH_BITREVERSE depending on BITREVERSE. This unmet dependency bug was found by Kismet, a static analysis tool for Kconfig. Please advise if this is not the appropriate solution. Signed-off-by: Julian Braha <julianbraha@gmail.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-02-16  overflow: Provide constant expression struct_size  (Kees Cook)
There have been cases where struct_size() (or flex_array_size()) needs to be calculated for an initializer, which requires it be a constant expression. This is possible when the "count" argument is a constant expression, so provide this ability for the helpers. Cc: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org> Tested-by: Gustavo A. R. Silva <gustavoars@kernel.org> Link: https://lore.kernel.org/lkml/20220210010407.GA701603@embeddedor
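That makes hypothetical constructs like the following possible, where the result must be an integer constant expression (an array bound or a static initializer), as long as "count" is itself constant:

  struct pkt {
          u16 len;
          u8 data[];
  };

  /* constant "count" => struct_size() is now a constant expression */
  static u8 pkt_buf[struct_size((struct pkt *)0, data, 64)];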
2022-02-16  overflow: Implement size_t saturating arithmetic helpers  (Kees Cook)
In order to perform more open-coded replacements of common allocation size arithmetic, the kernel needs saturating (SIZE_MAX) helpers for multiplication, addition, and subtraction. For example, it is common in allocators, especially on realloc, to add to an existing size:

  p = krealloc(map->patch,
               sizeof(struct reg_sequence) * (map->patch_regs + num_regs),
               GFP_KERNEL);

There is no existing saturating replacement for this calculation, and just leaving the addition open coded inside array_size() could potentially overflow as well. For example, an overflow in an expression for a size_t argument might wrap to zero:

  array_size(anything, something_at_size_max + 1) == 0

Introduce size_mul(), size_add(), and size_sub() helpers that implicitly promote arguments to size_t and perform saturated calculations for use in allocations. With these helpers it is also possible to redefine array_size(), array3_size(), flex_array_size(), and struct_size() in terms of the new helpers. As with the check_*_overflow() helpers, the new helpers use __must_check, though what is really desired is a way to make sure that assignment is only to a size_t lvalue. Without this, it's still possible to introduce overflow/underflow via type conversion (i.e. from size_t to int). Enforcing this will currently need to be left to static analysis or future use of -Wconversion. Additionally update the overflow unit tests to force runtime evaluation for the pathological cases. Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Leon Romanovsky <leon@kernel.org> Cc: Keith Busch <kbusch@kernel.org> Cc: Len Baker <len.baker@gmx.com> Signed-off-by: Kees Cook <keescook@chromium.org>
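With the new helpers, the krealloc() example above becomes the following; any intermediate overflow now saturates to SIZE_MAX, which the allocator then refuses:

  p = krealloc(map->patch,
               size_mul(sizeof(struct reg_sequence),
                        size_add(map->patch_regs, num_regs)),
               GFP_KERNEL);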
2022-02-13  fortify: Detect struct member overflows in memset() at compile-time  (Kees Cook)
As done for memcpy(), also update memset() to use the same tightened compile-time bounds checking under CONFIG_FORTIFY_SOURCE. Signed-off-by: Kees Cook <keescook@chromium.org>
2022-02-13  fortify: Detect struct member overflows in memmove() at compile-time  (Kees Cook)
As done for memcpy(), also update memmove() to use the same tightened compile-time checks under CONFIG_FORTIFY_SOURCE. Signed-off-by: Kees Cook <keescook@chromium.org>
2022-02-13  fortify: Detect struct member overflows in memcpy() at compile-time  (Kees Cook)
memcpy() is dead; long live memcpy()

tl;dr: In order to eliminate a large class of common buffer overflow flaws that continue to persist in the kernel, have memcpy() (under CONFIG_FORTIFY_SOURCE) perform bounds checking of the destination struct member when they have a known size. This would have caught all of the memcpy()-related buffer write overflow flaws identified in at least the last three years.

Background and analysis:

While stack-based buffer overflow flaws are largely mitigated by stack canaries (and similar) features, heap-based buffer overflow flaws continue to regularly appear in the kernel. Many classes of heap buffer overflows are mitigated by FORTIFY_SOURCE when using the strcpy() family of functions, but a significant number remain exposed through the memcpy() family of functions.

At its core, FORTIFY_SOURCE uses the compiler's __builtin_object_size() internal[0] to determine the available size at a target address based on the compile-time known structure layout details. It operates in two modes: outer bounds (0) and inner bounds (1). In mode 0, the size of the enclosing structure is used. In mode 1, the size of the specific field is used. For example:

  struct object {
          u16 scalar1;    /* 2 bytes */
          char array[6];  /* 6 bytes */
          u64 scalar2;    /* 8 bytes */
          u32 scalar3;    /* 4 bytes */
          u32 scalar4;    /* 4 bytes */
  } instance;

__builtin_object_size(instance.array, 0) == 22, since the remaining size of the enclosing structure starting from "array" is 22 bytes (6 + 8 + 4 + 4). __builtin_object_size(instance.array, 1) == 6, since the remaining size of the specific field "array" is 6 bytes.

The initial implementation of FORTIFY_SOURCE used mode 0 because there were many cases of both strcpy() and memcpy() functions being used to write (or read) across multiple fields in a structure. For example, it would catch this, which is writing 2 bytes beyond the end of "instance":

  memcpy(&instance.array, data, 25);

While this didn't protect against overwriting adjacent fields in a given structure, it would at least stop overflows from reaching beyond the end of the structure into neighboring memory, and provided a meaningful mitigation of a subset of buffer overflow flaws. However, many desirable targets remain within the enclosing structure (for example function pointers).

As it happened, there were very few cases of strcpy() family functions intentionally writing beyond the end of a string buffer. Once all known cases were removed from the kernel, the strcpy() family was tightened[1] to use mode 1, providing greater mitigation coverage.

What remains is switching memcpy() to mode 1 as well, but making the switch is much more difficult because of how frustrating it can be to find existing "normal" uses of memcpy() that expect to write (or read) across multiple fields. The root cause of the problem is that the C language lacks a common pattern to indicate the intent of an author's use of memcpy(), and is further complicated by the available compile-time and run-time mitigation behaviors.

The FORTIFY_SOURCE mitigation comes in two halves: the compile-time half, when both the buffer size _and_ the length of the copy is known, and the run-time half, when only the buffer size is known. If neither size is known, there is no bounds checking possible. At compile-time when the compiler sees that a length will always exceed a known buffer size, a warning can be deterministically emitted. For the run-time half, the length is tested against the known size of the buffer, and the overflowing operation is detected.
(The performance overhead for these tests is virtually zero.)

It is relatively easy to find compile-time false-positives since a warning is always generated. Fixing the false positives, however, can be very time-consuming as there are hundreds of instances. While it's possible some over-read conditions could lead to kernel memory exposures, the bulk of the risk comes from the run-time flaws where the length of a write may end up being attacker-controlled and lead to an overflow.

Many of the compile-time false-positives take a form similar to this:

  memcpy(&instance.scalar2, data,
         sizeof(instance.scalar2) + sizeof(instance.scalar3));

and the run-time ones are similar, but lack a constant expression for the size of the copy:

  memcpy(instance.array, data, length);

The former is meant to cover multiple fields (though its style has been frowned upon more recently), but has been technically legal. Both lack any expressivity in the C language about the author's _intent_ in a way that a compiler can check when the length isn't known at compile time. A comment doesn't work well because what's needed is something a compiler can directly reason about. Is a given memcpy() call expected to overflow into neighbors? Is it not? By using the new struct_group() macro, this intent can be much more easily encoded.

It is not as easy to find the run-time false-positives since the code path to exercise a seemingly out-of-bounds condition that is actually expected may not be trivially reachable. Tightening the restrictions to block an operation for a false positive will either potentially create a greater flaw (if a copy is truncated by the mitigation), or destabilize the kernel (e.g. with a BUG()), making things completely useless for the end user. As a result, tightening the memcpy() restriction (when there is a reasonable level of uncertainty of the number of false positives), needs to first WARN() with no truncation. (Though any sufficiently paranoid end-user can always opt to set the panic_on_warn=1 sysctl.) Once enough development time has passed, the mitigation can be further intensified. (Note that this patch is only the compile-time checking step, which is a prerequisite to doing run-time checking, which will come in future patches.)

Given the potential frustrations of weeding out all the false positives when tightening the run-time checks, it is reasonable to wonder if these changes would actually add meaningful protection. Looking at just the last three years, there are 23 identified flaws with a CVE that mention "buffer overflow", and 11 are memcpy()-related buffer overflows. (For the remaining 12: 7 are array index overflows that would be mitigated by systems built with CONFIG_UBSAN_BOUNDS=y: CVE-2019-0145, CVE-2019-14835, CVE-2019-14896, CVE-2019-14897, CVE-2019-14901, CVE-2019-17666, CVE-2021-28952. 2 are miscalculated allocation sizes which could be mitigated with memory tagging: CVE-2019-16746, CVE-2019-2181. 1 is an iovec buffer bug maybe mitigated by memory tagging: CVE-2020-10742. 1 is a type confusion bug mitigated by stack canaries: CVE-2020-10942. 1 is a string handling logic bug with no mitigation I'm aware of: CVE-2021-28972.)

At my last count on an x86_64 allmodconfig build, there are 35,294 calls to memcpy(). With callers instrumented to report all places where the buffer size is known but the length remains unknown (i.e.
a run-time bounds check is added), we can count how many new run-time bounds checks are added when the destination and source arguments of memcpy() are changed to use "mode 1" bounds checking: 1,276. This means for the future run-time checking, there is a worst-case upper bound of 3.6% false positives to fix. In addition, there were around 150 new compile-time warnings to evaluate and fix (which have now been fixed).

With this instrumentation it's also possible to compare the places where the known 11 memcpy() flaw overflows manifested against the resulting list of potential new run-time bounds checks, as a measure of potential efficacy of the tightened mitigation. Much to my surprise, horror, and delight, all 11 flaws would have been detected by the newly added run-time bounds checks, making this a distinctly clear mitigation improvement: 100% coverage for known memcpy() flaws, with a possible 2 orders of magnitude gain in coverage over existing but undiscovered run-time dynamic length flaws (i.e. 1265 newly covered sites in addition to the 11 known), against only <4% of all memcpy() callers maybe gaining a false positive run-time check, with only about 150 new compile-time instances needing evaluation.

Specifically these would have been mitigated:
  CVE-2020-24490 https://git.kernel.org/linus/a2ec905d1e160a33b2e210e45ad30445ef26ce0e
  CVE-2020-12654 https://git.kernel.org/linus/3a9b153c5591548612c3955c9600a98150c81875
  CVE-2020-12653 https://git.kernel.org/linus/b70261a288ea4d2f4ac7cd04be08a9f0f2de4f4d
  CVE-2019-14895 https://git.kernel.org/linus/3d94a4a8373bf5f45cf5f939e88b8354dbf2311b
  CVE-2019-14816 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
  CVE-2019-14815 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
  CVE-2019-14814 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
  CVE-2019-10126 https://git.kernel.org/linus/69ae4f6aac1578575126319d3f55550e7e440449
  CVE-2019-9500 https://git.kernel.org/linus/1b5e2423164b3670e8bc9174e4762d297990deff
  no-CVE-yet https://git.kernel.org/linus/130f634da1af649205f4a3dd86cbe5c126b57914
  no-CVE-yet https://git.kernel.org/linus/d10a87a3535cce2b890897914f5d0d83df669c63

To accelerate the review of potential run-time false positives, it's also worth noting that it is possible to partially automate checking by examining the memcpy() buffer argument to check for the destination struct member having a neighboring array member. It is reasonable to expect that the vast majority of run-time false positives would look like the already evaluated and fixed compile-time false positives, where the most common pattern is neighboring arrays. (And, FWIW, many of the compile-time fixes were actual bugs, so it is reasonable to assume we'll have similar cases of actual bugs getting fixed for run-time checks.)

Implementation:

Tighten the memcpy() destination buffer size checking to use the actual ("mode 1") target buffer size as the bounds check instead of their enclosing structure's ("mode 0") size. Use a common inline for memcpy() (and memmove() in a following patch), since all the tests are the same. All new cross-field memcpy() uses must use the struct_group() macro or similar to target a specific range of fields, so that FORTIFY_SOURCE can reason about the size and safety of the copy.

For now, cross-member "mode 1" _read_ detection at compile-time will be limited to W=1 builds, since it is, unfortunately, very common.
As the priority is solving write overflows, read overflows will be part of a future phase (and can be fixed in parallel, for anyone wanting to look at W=1 build output).

For run-time, the "mode 0" size checking and mitigation is left unchanged, with "mode 1" to be added in stages. In this patch, no new run-time checks are added. Future patches will first bounds-check writes, and only perform a WARN() for now. This way any missed run-time false positives can be flushed out over the coming several development cycles, but system builders who have tested their workloads to be WARN()-free can enable the panic_on_warn=1 sysctl to immediately gain a mitigation against this class of buffer overflows. Once that is under way, run-time bounds-checking of reads can be similarly enabled.

Related classes of flaws that will remain unmitigated:

- memcpy() with flexible array structures, as the compiler does not currently have visibility into the size of the trailing flexible array. These can be fixed in the future by refactoring such cases to use a new set of flexible array structure helpers to perform the common serialization/deserialization code patterns doing allocation and/or copying.

- memcpy() with raw pointers (e.g. void *, char *, etc), or otherwise having their buffer size unknown at compile time, have no good mitigation beyond memory tagging (and even that would only protect against inter-object overflow, not intra-object neighboring field overflows), or refactoring. Some kind of "fat pointer" solution is likely needed to gain proper size-of-buffer awareness. (e.g. see struct membuf)

- type confusion where a higher level type's allocation size does not match the resulting cast type eventually passed to a deeper memcpy() call where the compiler cannot see the true type. In theory, greater static analysis could catch these, and the use of -Warray-bounds will help find some of these.

[0] https://gcc.gnu.org/onlinedocs/gcc/Object-Size-Checking.html
[1] https://git.kernel.org/linus/6a39e62abbafd1d58d1722f40c7d26ef379c6a2f

Signed-off-by: Kees Cook <keescook@chromium.org>
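For reference, an intentional cross-field copy under the tightened checking looks like this (a sketch reusing the example struct from the commit message above):

  struct object {
          u16 scalar1;
          struct_group(burst,     /* a named window for cross-field copies */
                  char array[6];
                  u64 scalar2;
          );
          u32 scalar3;
  } instance;

  /* The intent is now expressed in the type, so "mode 1"
   * FORTIFY_SOURCE can bounds-check it like any other member: */
  memcpy(&instance.burst, data, sizeof(instance.burst));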
2022-02-10  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-10  vsprintf: Move space out of string literals in fourcc_string()  (Andy Shevchenko)
The string literals "big-endian" and "little-endian" may potentially occur in other places. Dropping the leading space allows the linker to merge them, keeping only a single copy. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20220127181233.72910-2-andriy.shevchenko@linux.intel.com
2022-02-10  vsprintf: Fix potential unaligned access  (Andy Shevchenko)
The %p4cc specifier in some cases might get an unaligned pointer. Because of this we need to copy the value to a local variable once, to avoid potential crashes on some architectures due to improper access. Fixes: af612e43de6d ("lib/vsprintf: Add support for printing V4L2 and DRM fourccs") Cc: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20220127181233.72910-1-andriy.shevchenko@linux.intel.com
2022-02-09  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski)
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-02-09

We've added 126 non-merge commits during the last 16 day(s) which contain a total of 201 files changed, 4049 insertions(+), 2215 deletions(-).

The main changes are:

1) Add custom BPF allocator for JITs that pack multiple programs into a huge page to reduce iTLB pressure, from Song Liu.

2) Add __user tagging support in vmlinux BTF and utilize it from BPF verifier when generating loads, from Yonghong Song.

3) Add per-socket fast path check guarding from cgroup/BPF overhead when used by only some sockets, from Pavel Begunkov.

4) Continued libbpf deprecation work of APIs/features and removal of their usage from samples, selftests, libbpf & bpftool, from Andrii Nakryiko and various others.

5) Improve BPF instruction set documentation by adding byte swap instructions and cleaning up load/store section, from Christoph Hellwig.

6) Switch BPF preload infra to light skeleton and remove libbpf dependency from it, from Alexei Starovoitov.

7) Fix architecture-agnostic macros in libbpf for accessing syscall arguments from BPF progs for non-x86 architectures, from Ilya Leoshkevich.

8) Rework port members in struct bpf_sk_lookup and struct bpf_sock to be of 16-bit field with anonymous zero padding, from Jakub Sitnicki.

9) Add new bpf_copy_from_user_task() helper to read memory from a different task than current. Add ability to create sleepable BPF iterator progs, from Kenny Yu.

10) Implement XSK batching for ice's zero-copy driver used by AF_XDP and utilize TX batching API from XSK buffer pool, from Maciej Fijalkowski.

11) Generate temporary netns names for BPF selftests to avoid naming collisions, from Hangbin Liu.

12) Implement bpf_core_types_are_compat() with limited recursion for in-kernel usage, from Matteo Croce.

13) Simplify pahole version detection and finally enable CONFIG_DEBUG_INFO_DWARF5 to be selected with CONFIG_DEBUG_INFO_BTF, from Nathan Chancellor.

14) Misc minor fixes to libbpf and selftests from various folks.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (126 commits)
  selftests/bpf: Cover 4-byte load from remote_port in bpf_sk_lookup
  bpf: Make remote_port field in struct bpf_sk_lookup 16-bit wide
  libbpf: Fix compilation warning due to mismatched printf format
  selftests/bpf: Test BPF_KPROBE_SYSCALL macro
  libbpf: Add BPF_KPROBE_SYSCALL macro
  libbpf: Fix accessing the first syscall argument on s390
  libbpf: Fix accessing the first syscall argument on arm64
  libbpf: Allow overriding PT_REGS_PARM1{_CORE}_SYSCALL
  selftests/bpf: Skip test_bpf_syscall_macro's syscall_arg1 on arm64 and s390
  libbpf: Fix accessing syscall arguments on riscv
  libbpf: Fix riscv register names
  libbpf: Fix accessing syscall arguments on powerpc
  selftests/bpf: Use PT_REGS_SYSCALL_REGS in bpf_syscall_macro
  libbpf: Add PT_REGS_SYSCALL_REGS macro
  selftests/bpf: Fix an endianness issue in bpf_syscall_macro test
  bpf: Fix bpf_prog_pack build HPAGE_PMD_SIZE
  bpf: Fix leftover header->pages in sparc and powerpc code.
  libbpf: Fix signedness bug in btf_dump_array_data()
  selftests/bpf: Do not export subtest as standalone test
  bpf, x86_64: Fail gracefully on bpf_jit_binary_pack_finalize failures
  ...
====================

Link: https://lore.kernel.org/r/20220209210050.8425-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-09  test_overflow: Regularize test reporting output  (Kees Cook)
Report test run summaries more regularly, so it's easier to understand the output:

- Remove noisy "ok" reports for shift and allocator tests.

- Reorganize per-type output to the end of each type's tests.

- Replace redundant vmalloc tests with __vmalloc so that __GFP_NOWARN can be used to keep the expected failure warnings out of dmesg, similar to commit 8e060c21ae2c ("lib/test_overflow.c: avoid tainting the kernel and fix wrap size")

Resulting output:

  test_overflow: 18 u8 arithmetic tests finished
  test_overflow: 19 s8 arithmetic tests finished
  test_overflow: 17 u16 arithmetic tests finished
  test_overflow: 17 s16 arithmetic tests finished
  test_overflow: 17 u32 arithmetic tests finished
  test_overflow: 17 s32 arithmetic tests finished
  test_overflow: 17 u64 arithmetic tests finished
  test_overflow: 21 s64 arithmetic tests finished
  test_overflow: 113 shift tests finished
  test_overflow: 17 overflow size helper tests finished
  test_overflow: 11 allocation overflow tests finished
  test_overflow: all tests passed

Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Link: https://lore.kernel.org/all/eb6d02ae-e2ed-e7bd-c700-8a6d004d84ce@rasmusvillemoes.dk/ Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/all/CAKwvOdnYYa+72VhtJ4ug=SJVFn7w+n7Th+hKYE87BRDt4hvqOg@mail.gmail.com/ Signed-off-by: Kees Cook <keescook@chromium.org>
2022-02-08  cxl: Prove CXL locking  (Dan Williams)
When CONFIG_PROVE_LOCKING is enabled the 'struct device' definition gets an additional mutex that is not clobbered by lockdep_set_novalidate_class() like the typical device_lock(). This allows for local annotation of subsystem locks with mutex_lock_nested() per the subsystem's object/lock hierarchy. For CXL, this primarily needs the ability to lock ports by depth and child objects of ports by their parent port's lock. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Ben Widawsky <ben.widawsky@intel.com> Link: https://lore.kernel.org/r/164365853422.99383.1052399160445197427.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-02-08  sbitmap: Delete old sbitmap_queue_get_shallow()  (John Garry)
Since __sbitmap_queue_get_shallow() was introduced in commit c05e66733788 ("sbitmap: add sbitmap_get_shallow() operation"), the non-underscore sbitmap_queue_get_shallow() has not been used. Delete the unused sbitmap_queue_get_shallow() and rename __sbitmap_queue_get_shallow() -> sbitmap_queue_get_shallow(), as it is odd to have a public __foo but no foo at all. Signed-off-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/1644322024-105340-1-git-send-email-john.garry@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-02-08  lib/sbitmap: kill 'depth' from sbitmap_word  (Ming Lei)
Only the last sbitmap_word can have a different depth; all the others must have the same depth of 1U << sb->shift, so it is not necessary to store it in sbitmap_word: it can be retrieved easily and efficiently by adding one internal helper, __map_depth(sb, index). Remove the 'depth' field from sbitmap_word; the ____cacheline_aligned_in_smp annotation for 'word' then isn't needed any more. No performance effect was seen when running a highly parallel IOPS test on null_blk. This saves us one cacheline (usually 64 bytes) per sbitmap_word. Cc: Martin Wilck <martin.wilck@suse.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Martin Wilck <mwilck@suse.com> Reviewed-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/20220110072945.347535-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
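The helper is small enough to quote in full (a sketch matching the description above):

  static inline unsigned int __map_depth(const struct sbitmap *sb, int index)
  {
          /* only the last word may be shorter than 1U << sb->shift */
          if (index == sb->map_nr - 1)
                  return sb->depth - (index << sb->shift);
          return 1U << sb->shift;
  }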
2022-02-06  ref_tracker: remove filter_irq_stacks() call  (Eric Dumazet)
After commit e94006608949 ("lib/stackdepot: always do filter_irq_stacks() in stack_depot_save()") it became unnecessary to filter the stack before calling stack_depot_save(). Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Marco Elver <elver@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-05  ref_tracker: add a count of untracked references  (Eric Dumazet)
We are still chasing a netdev refcount imbalance, and we suspect we have one rogue dev_put() that is consuming a reference taken with dev_hold_track(). To detect this case, allow ref_tracker_alloc() and ref_tracker_free() to be called with a NULL @trackerp parameter, and use a dedicated refcount_t just for them. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
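Callers that cannot carry a tracker pointer can then do the following (a sketch; the dir name is illustrative):

  /* counted in the dedicated "untracked" refcount_t instead of
   * allocating a struct ref_tracker: */
  ref_tracker_alloc(&dev->refcnt_tracker, NULL, GFP_ATOMIC);

  ref_tracker_free(&dev->refcnt_tracker, NULL);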
2022-02-05  ref_tracker: implement use-after-free detection  (Eric Dumazet)
Whenever ref_tracker_dir_init() is called, mark the struct ref_tracker_dir as dead. Test the dead status from ref_tracker_alloc() and ref_tracker_free(). This should detect buggy dev_put()/dev_hold() happening too late in the netdevice dismantle process. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04  lib/crypto: blake2s: avoid indirect calls to compression function for Clang CFI  (Jason A. Donenfeld)
blake2s_compress_generic is weakly aliased by blake2s_compress. The current harness for function selection uses a function pointer, which is ordinarily inlined and resolved at compile time. But when Clang's CFI is enabled, CFI still triggers when making an indirect call via a weak symbol. This seems like a bug in Clang's CFI, as though it's bucketing weak symbols and strong symbols differently. It also only seems to trigger when "full LTO" mode is used, rather than "thin LTO".

  [ 0.000000][ T0] Kernel panic - not syncing: CFI failure (target: blake2s_compress_generic+0x0/0x1444)
  [ 0.000000][ T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.16.0-mainline-06981-g076c855b846e #1
  [ 0.000000][ T0] Hardware name: MT6873 (DT)
  [ 0.000000][ T0] Call trace:
  [ 0.000000][ T0]  dump_backtrace+0xfc/0x1dc
  [ 0.000000][ T0]  dump_stack_lvl+0xa8/0x11c
  [ 0.000000][ T0]  panic+0x194/0x464
  [ 0.000000][ T0]  __cfi_check_fail+0x54/0x58
  [ 0.000000][ T0]  __cfi_slowpath_diag+0x354/0x4b0
  [ 0.000000][ T0]  blake2s_update+0x14c/0x178
  [ 0.000000][ T0]  _extract_entropy+0xf4/0x29c
  [ 0.000000][ T0]  crng_initialize_primary+0x24/0x94
  [ 0.000000][ T0]  rand_initialize+0x2c/0x6c
  [ 0.000000][ T0]  start_kernel+0x2f8/0x65c
  [ 0.000000][ T0]  __primary_switched+0xc4/0x7be4
  [ 0.000000][ T0] Rebooting in 5 seconds..

Nonetheless, the function pointer method isn't so terrific anyway, so this patch replaces it with a simple boolean, which also gets inlined away. This successfully works around the Clang bug. In general, I'm not too keen on all of the indirection involved here; it clearly does more harm than good. Hopefully the whole thing can get cleaned up down the road when lib/crypto is overhauled more comprehensively. But for now, we go with a simple bandaid. Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in") Link: https://github.com/ClangBuiltLinux/linux/issues/1567 Reported-by: Miles Chen <miles.chen@mediatek.com> Tested-by: Miles Chen <miles.chen@mediatek.com> Tested-by: Nathan Chancellor <nathan@kernel.org> Tested-by: John Stultz <john.stultz@linaro.org> Acked-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
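The replacement selection looks roughly like this (a sketch; the block/buffer accounting is elided):

  static inline void __blake2s_update(struct blake2s_state *state,
                                      const u8 *in, size_t inlen,
                                      bool force_generic)
  {
          /* ... block accounting elided; for each run of blocks: ... */
          if (force_generic)
                  blake2s_compress_generic(state, in, nblocks,
                                           BLAKE2S_BLOCK_SIZE);
          else
                  blake2s_compress(state, in, nblocks,
                                   BLAKE2S_BLOCK_SIZE);
          /* ... */
  }

Since force_generic is a compile-time constant at each call site, the branch folds away just like the old pointer did, but without any indirect call for CFI to police.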
2022-02-02  lib/Kconfig.debug: Allow BTF + DWARF5 with pahole 1.21+  (Nathan Chancellor)
Commit 98cd6f521f10 ("Kconfig: allow explicit opt in to DWARF v5") prevented CONFIG_DEBUG_INFO_DWARF5 from being selected when CONFIG_DEBUG_INFO_BTF is enabled because pahole had issues with clang's DWARF5 info. This was resolved by [1], which is in pahole v1.21. Allow DEBUG_INFO_DWARF5 to be selected with DEBUG_INFO_BTF when using pahole v1.21 or newer. [1]: https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=7d8e829f636f47aba2e1b6eda57e74d8e31f733c Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-6-nathan@kernel.org
2022-02-02  lib/Kconfig.debug: Use CONFIG_PAHOLE_VERSION  (Nathan Chancellor)
Now that CONFIG_PAHOLE_VERSION exists, use it in the definition of CONFIG_PAHOLE_HAS_SPLIT_BTF and CONFIG_PAHOLE_HAS_BTF_TAG to reduce the amount of duplication across the tree. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-5-nathan@kernel.org
2022-01-31  kunit: factor out str constants from binary assertion structs  (Daniel Latypov)
If the compiler doesn't optimize them away, each kunit assertion (use of KUNIT_EXPECT_EQ, etc.) can use 88 bytes of stack space in the worst and most common case. This has led to compiler warnings and a suggestion from Linus to move data from the structs into static const's where possible [1]. This builds upon [2] which did so for the base struct kunit_assert type. That only reduced sizeof(struct kunit_binary_assert) from 88 to 64. Given these are by far the most commonly used asserts, this patch factors out the textual representations of the operands and comparator into another static const, saving 16 more bytes. In detail, KUNIT_EXPECT_EQ(test, 2 + 2, 5) yields the following

  (struct kunit_binary_assert) {
          .assert = <struct kunit_assert>,
          .operation = "==",
          .left_text = "2 + 2",
          .left_value = 4,
          .right_text = "5",
          .right_value = 5,
  }

After this change

  static const struct kunit_binary_assert_text __text = {
          .operation = "==",
          .left_text = "2 + 2",
          .right_text = "5",
  };
  (struct kunit_binary_assert) {
          .assert = <struct kunit_assert>,
          .text = &__text,
          .left_value = 4,
          .right_value = 5,
  }

This also DRYs the code a bit more since these str fields were repeated for the string and pointer versions of kunit_binary_assert. Note: we could name the kunit_binary_assert_text fields left/right instead of left_text/right_text. But that would require changing the macros a bit since they have args called "left" and "right" which would be substituted in `.left = #left` as `.2 + 2 = \"2 + 2\"`. [1] https://groups.google.com/g/kunit-dev/c/i3fZXgvBrfA/m/VULQg1z6BAAJ [2] https://lore.kernel.org/linux-kselftest/20220113165931.451305-6-dlatypov@google.com/ Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-01-31  kunit: remove va_format from kunit_assert  (Daniel Latypov)
The concern is that having a lot of redundant fields in kunit_assert can blow up stack usage if the compiler doesn't optimize them away [1]. The comment on this field implies that it was meant to be initialized when the expect/assert was declared, but this only happens when we run kunit_do_failed_assertion(). We don't need to access it outside of that function, so move it out of the struct and make it a local variable there. This change also takes the chance to reduce the number of macros by inlining the now simplified KUNIT_INIT_ASSERT_STRUCT() macro. [1] https://groups.google.com/g/kunit-dev/c/i3fZXgvBrfA/m/VULQg1z6BAAJ Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-01-31  lib/crc32test: correct printed bytes count  (Kevin Bracey)
crc32c_le self test had a stray multiply by two inherited from the crc32_le+crc32_be test loop. Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  lib/crc32: Make crc32_be weak for arch override  (Kevin Bracey)
crc32_le and __crc32c_le can be overridden - extend this to crc32_be. Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  lib/crc32: remove unneeded casts  (Kevin Bracey)
Casts were added in commit 8f243af42ade ("sections: fix const sections for crc32 table") to cope with the tables not being const. They are no longer required since commit f5e38b9284e1 ("lib: crc32: constify crc32 lookup table"). Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-30  kasan: test: fix compatibility with FORTIFY_SOURCE  (Marco Elver)
With CONFIG_FORTIFY_SOURCE enabled, string functions will also perform dynamic checks using __builtin_object_size(ptr), which when failed will panic the kernel. Because the KASAN test deliberately performs out-of-bounds operations, the kernel panics with FORTIFY_SOURCE, for example:

  | kernel BUG at lib/string_helpers.c:910!
  | invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI
  | CPU: 1 PID: 137 Comm: kunit_try_catch Tainted: G B 5.16.0-rc3+ #3
  | Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
  | RIP: 0010:fortify_panic+0x19/0x1b
  | ...
  | Call Trace:
  |  kmalloc_oob_in_memset.cold+0x16/0x16
  |  ...

Fix it by also hiding `ptr` from the optimizer, which will ensure that __builtin_object_size() does not return a valid size, preventing fortified string functions from panicking. Link: https://lkml.kernel.org/r/20220124160744.1244685-1-elver@google.com Signed-off-by: Marco Elver <elver@google.com> Reported-by: Nico Pache <npache@redhat.com> Reviewed-by: Nico Pache <npache@redhat.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
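The pattern in the tests then becomes (a sketch; OPTIMIZER_HIDE_VAR() is the existing barrier macro from compiler.h):

  size_t size = 64;
  char *ptr = kmalloc(size, GFP_KERNEL);

  KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
  /* Hide ptr so __builtin_object_size(ptr) cannot resolve and
   * FORTIFY_SOURCE stays out of KASAN's way: */
  OPTIMIZER_HIDE_VAR(ptr);

  KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr, 0, size + 5));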
2022-01-28  crypto: sm3 - create SM3 stand-alone library  (Tianjia Zhang)
Stand-alone implementation of the SM3 algorithm. It is designed to have as little dependencies as possible. In other cases you should generally use the hash APIs from include/crypto/hash.h. Especially when hashing large amounts of data as those APIs may be hw-accelerated. In the new SM3 stand-alone library, sm3_transform() has also been optimized, instead of simply using the code in sm3_generic. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Reviewed-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
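Usage of the library mirrors the other lib/crypto hashes (a sketch, assuming the sm3_init()/sm3_update()/sm3_final() entry points this series provides):

  #include <crypto/sm3.h>

  struct sm3_state sctx;
  u8 digest[SM3_DIGEST_SIZE];

  sm3_init(&sctx);
  sm3_update(&sctx, data, len);
  sm3_final(&sctx, digest);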
2022-01-27  compiler_types: define __user as __attribute__((btf_type_tag("user")))  (Yonghong Song)
The __user attribute is currently mainly used by sparse for type checking. The attribute indicates whether a memory access is in user memory address space or not. Such information is important during tracing kernel internal functions or data structures as accessing user memory often has different mechanisms compared to accessing kernel memory. For example, the perf-probe needs explicit command line specification to indicate a particular argument or string in user-space memory ([1], [2], [3]). Currently, vmlinux BTF is available in kernel with many distributions. If __user attribute information is available in vmlinux BTF, the explicit user memory access information from users will not be necessary as the kernel can figure it out by itself with vmlinux BTF.

Besides the above possible use for perf/probe, another use case is for bpf verifier. Currently, for bpf BPF_PROG_TYPE_TRACING type of bpf programs, users can write direct code like p->m1->m2 and "p" could be a function parameter. Without __user information in BTF, the verifier will assume p->m1 accessing kernel memory and will generate normal loads. Let us say "p" actually tagged with __user in the source code. In such cases, p->m1 is actually accessing user memory and direct load is not right and may produce incorrect result. For such cases, bpf_probe_read_user() will be the correct way to read p->m1.

To support encoding __user information in BTF, a new attribute __attribute__((btf_type_tag("<arbitrary_string>"))) is implemented in clang ([4]). For example, if we have

  #define __user __attribute__((btf_type_tag("user")))

during kernel compilation, the attribute "user" information will be preserved in dwarf. After pahole converting dwarf to BTF, __user information will be available in vmlinux BTF. The following is an example with latest upstream clang (clang14) and pahole 1.23:

  [$ ~] cat test.c
  #define __user __attribute__((btf_type_tag("user")))
  int foo(int __user *arg) {
          return *arg;
  }
  [$ ~] clang -O2 -g -c test.c
  [$ ~] pahole -JV test.o
  ...
  [1] INT int size=4 nr_bits=32 encoding=SIGNED
  [2] TYPE_TAG user type_id=1
  [3] PTR (anon) type_id=2
  [4] FUNC_PROTO (anon) return=1 args=(3 arg)
  [5] FUNC foo type_id=4
  [$ ~]

You can see for the function argument "int __user *arg", its type is described as

  PTR -> TYPE_TAG(user) -> INT

The kernel can use this information for bpf verification or other use cases. Current btf_type_tag is only supported in clang (>= clang14) and pahole (>= 1.23). gcc support is also proposed and under development ([5]).

[1] http://lkml.kernel.org/r/155789874562.26965.10836126971405890891.stgit@devnote2
[2] http://lkml.kernel.org/r/155789872187.26965.4468456816590888687.stgit@devnote2
[3] http://lkml.kernel.org/r/155789871009.26965.14167558859557329331.stgit@devnote2
[4] https://reviews.llvm.org/D111199
[5] https://lore.kernel.org/bpf/0cbeb2fb-1a18-f690-e360-24b1c90c2a91@fb.com/

Signed-off-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20220127154600.652613-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-01-27  blk-mq: Fix wrong wakeup batch configuration which will cause hang  (Laibin Qiu)
Commit 180dccb0dba4f ("blk-mq: fix tag_get wait task can't be awakened") will recalculate wake_batch when incrementing or decrementing active_queues to avoid wake_batch > hctx_max_depth. At the same time, in order to not affect performance as much as possible, the minimum wakeup batch is set to 4. But when the QD is small (such as QD=1), if inc or dec active_queues increases the wakeup batch, that can lead to a hang. Fix this problem with the following strategy:

  QD:           >= 32 | < 32
  ---------------------------
  wakeup batch:   8~4 | 3~1

Fixes: 180dccb0dba4f ("blk-mq: fix tag_get wait task can't be awakened") Link: https://lore.kernel.org/linux-block/78cafe94-a787-e006-8851-69906f0c2128@huawei.com/T/#t Reported-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca> Signed-off-by: Laibin Qiu <qiulaibin@huawei.com> Tested-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca> Link: https://lore.kernel.org/r/20220127100047.1763746-1-qiulaibin@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-01-25  kunit: split out part of kunit_assert into a static const  (Daniel Latypov)
This is per Linus's suggestion in [1]. The issue there is that every KUNIT_EXPECT/KUNIT_ASSERT puts a kunit_assert object onto the stack. Normally we rely on compilers to elide this, but when that doesn't work out, this blows up the stack usage of kunit test functions. We can move some data off the stack by making it static. This change introduces a new `struct kunit_loc` to hold the file and line number and then just passing assert_type (EXPECT or ASSERT) as an argument. In [1], it was suggested to also move out the format string as well, but users could theoretically craft a format string at runtime, so we can't. This change leaves a copy of `assert_type` in kunit_assert for now because cleaning up all the macros to not pass it around is a bit more involved. Here's an example of the expanded code for KUNIT_FAIL():

  if (__builtin_expect(!!(!(false)), 0)) {
          static const struct kunit_loc loc = { .file = ... };
          struct kunit_fail_assert __assertion = { .assert = { .type ... };
          kunit_do_failed_assertion(test, &loc, KUNIT_EXPECTATION,
                                    &__assertion.assert, ...);
  };

[1] https://groups.google.com/g/kunit-dev/c/i3fZXgvBrfA/m/VULQg1z6BAAJ Signed-off-by: Daniel Latypov <dlatypov@google.com> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-01-25  kunit: factor out kunit_base_assert_format() call into kunit_fail()  (Daniel Latypov)
We call this function first thing for all the assertion `format()` functions. This is the part that prints the file and line number and assertion type (EXPECTATION, ASSERTION). Having it as part of the format functions lets us have the flexibility to not print that information (or print it differently) for new assertion types, but I don't think we need that. And in the future, we'd like to consider factoring that data (file, line#, type) out of the kunit_assert struct and into a `static` variable, as Linus suggested [1], so we'd need to extract it anyways. [1] https://groups.google.com/g/kunit-dev/c/i3fZXgvBrfA/m/VULQg1z6BAAJ Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-01-25  kunit: move check if assertion passed into the macros  (Daniel Latypov)
Currently the code always calls kunit_do_assertion() even though it does nothing when `pass` is true. This change moves the `if (!(pass))` check into the macro instead and renames the function to kunit_do_failed_assertion(). I feel this is a bit easier to read and understand. This has the potential upside of avoiding a function call that does nothing most of the time (assuming your tests are passing) but comes with the downside of generating a bit more code and branches. We try to mitigate the branches by tagging them with `unlikely()`. This also means we don't have to initialize structs that we don't need, which will become a tiny bit more expensive if we switch over to using static variables to try and reduce stack usage. (There's runtime code to check if the variable has been initialized yet or not.) Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-01-25  kunit: add example test case showing off all the expect macros  (Daniel Latypov)
Currently, these macros are only really documented near the bottom of https://www.kernel.org/doc/html/latest/dev-tools/kunit/api/test.html#c.KUNIT_FAIL. E.g. it's likely someone might just not realize that KUNIT_EXPECT_STREQ() exists and instead use KUNIT_EXPECT_FALSE(strcmp()) or similar. This can also serve as a basic smoketest that the KUnit assert machinery still works for all the macros. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
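For instance (an illustrative pair, not from the patch itself):

  /* works, but on failure only reports "Expected false, but was true": */
  KUNIT_EXPECT_FALSE(test, strcmp(buf, "hello"));
  /* reports both strings on failure: */
  KUNIT_EXPECT_STREQ(test, buf, "hello");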
2022-01-23  Merge tag 'bitmap-5.17-rc1' of git://github.com/norov/linux  (Linus Torvalds)
Pull bitmap updates from Yury Norov:

 - introduce for_each_set_bitrange()
 - use find_first_*_bit() instead of find_next_*_bit() where possible
 - unify for_each_bit() macros

* tag 'bitmap-5.17-rc1' of git://github.com/norov/linux:
  vsprintf: rework bitmap_list_string
  lib: bitmap: add performance test for bitmap_print_to_pagebuf
  bitmap: unify find_bit operations
  mm/percpu: micro-optimize pcpu_is_populated()
  Replace for_each_*_bit_from() with for_each_*_bit() where appropriate
  find: micro-optimize for_each_{set,clear}_bit()
  include/linux: move for_each_bit() macros from bitops.h to find.h
  cpumask: replace cpumask_next_* with cpumask_first_* where appropriate
  tools: sync tools/bitmap with mother linux
  all: replace find_next{,_zero}_bit with find_first{,_zero}_bit where appropriate
  cpumask: use find_first_and_bit()
  lib: add find_first_and_bit()
  arch: remove GENERIC_FIND_FIRST_BIT entirely
  include: move find.h from asm_generic to linux
  bitops: move find_bit_*_le functions from le.h to find.h
  bitops: protect find_first_{,zero}_bit properly
2022-01-22  lib/stackdepot: always do filter_irq_stacks() in stack_depot_save()  (Marco Elver)
The non-interrupt portion of interrupt stack traces before interrupt entry is usually arbitrary. Therefore, saving stack traces of interrupts (that include entries before interrupt entry) to stack depot leads to unbounded stackdepot growth. As such, use of filter_irq_stacks() is a requirement to ensure stackdepot can efficiently deduplicate interrupt stacks. Looking through all current users of stack_depot_save(), none (except KASAN) pass the stack trace through filter_irq_stacks() before passing it on to stack_depot_save(). Rather than adding filter_irq_stacks() to all current users of stack_depot_save(), it became clear that stack_depot_save() should simply do filter_irq_stacks(). Link: https://lkml.kernel.org/r/20211130095727.2378739-1-elver@google.com Signed-off-by: Marco Elver <elver@google.com> Reviewed-by: Alexander Potapenko <glider@google.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Vijayanand Jitta <vjitta@codeaurora.org> Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org> Cc: Imran Khan <imran.f.khan@oracle.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
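A typical caller is then simply (a sketch):

  unsigned long entries[64];
  unsigned int nr_entries;
  depot_stack_handle_t handle;

  nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
  /* no explicit filter_irq_stacks(entries, nr_entries) needed here
   * any more; stack_depot_save() now does it internally: */
  handle = stack_depot_save(entries, nr_entries, GFP_ATOMIC);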
2022-01-22  lib/stackdepot: allow optional init and stack_table allocation by kvmalloc()  (Vlastimil Babka)
Currently, enabling CONFIG_STACKDEPOT means its stack_table will be allocated from memblock, even if stack depot ends up not actually used. The default size of stack_table is 4MB on 32-bit, 8MB on 64-bit. This is fine for use-cases such as KASAN which is also a config option and has overhead on its own. But it's an issue for functionality that has to be actually enabled on boot (page_owner) or depends on hardware (GPU drivers) and thus the memory might be wasted. This was raised as an issue [1] when attempting to add stackdepot support for SLUB's debug object tracking functionality. It's common to build kernels with CONFIG_SLUB_DEBUG and enable slub_debug on boot only when needed, or create only specific kmem caches with debugging for testing purposes. It would thus be more efficient if stackdepot's table was allocated only when actually going to be used. This patch thus makes the allocation (and whole stack_depot_init() call) optional:

- Add a CONFIG_STACKDEPOT_ALWAYS_INIT flag to keep using the current well-defined point of allocation as part of mem_init(). Make CONFIG_KASAN select this flag.

- Other users have to call stack_depot_init() as part of their own init when it's determined that stack depot will actually be used. This may depend on both config and runtime conditions. Convert current users which are page_owner and several in the DRM subsystem. Same will be done for SLUB later.

- Because the init might now be called after the boot-time memblock allocation has given all memory to the buddy allocator, change stack_depot_init() to allocate stack_table with kvmalloc() when memblock is no longer available. Also handle allocation failure by disabling stackdepot (could have theoretically happened even with memblock allocation previously), and don't unnecessarily align the memblock allocation to its own size anymore.

[1] https://lore.kernel.org/all/CAMuHMdW=eoVzM1Re5FVoEN87nKfiLmM2+Ah7eNu2KXEhCvbZyA@mail.gmail.com/

Link: https://lkml.kernel.org/r/20211013073005.11351-1-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Dmitry Vyukov <dvyukov@google.com> Reviewed-by: Marco Elver <elver@google.com> # stackdepot Cc: Marco Elver <elver@google.com> Cc: Vijayanand Jitta <vjitta@codeaurora.org> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <mripard@kernel.org> Cc: Thomas Zimmermann <tzimmermann@suse.de> Cc: David Airlie <airlied@linux.ie> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Oliver Glitta <glittao@gmail.com> Cc: Imran Khan <imran.f.khan@oracle.com>

From: Colin Ian King <colin.king@canonical.com>
Subject: lib/stackdepot: fix spelling mistake and grammar in pr_err message

There is a spelling mistake of the word "allocation" so fix this and re-phrase the message to make it easier to read.

Link: https://lkml.kernel.org/r/20211015104159.11282-1-colin.king@canonical.com Signed-off-by: Colin Ian King <colin.king@canonical.com> Cc: Vlastimil Babka <vbabka@suse.cz>

From: Vlastimil Babka <vbabka@suse.cz>
Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup

On FLATMEM, we call page_ext_init_flatmem_late() just before kmem_cache_init(), which means stack_depot_init() (called by page owner init) will not properly recognize that it should use kvmalloc() and not memblock_alloc().
memblock_alloc() will also not issue a warning and return a block memory that can be invalid and cause kernel page fault when saving stacks, as reported by the kernel test robot [1]. Fix this by moving page_ext_init_flatmem_late() below kmem_cache_init() so that slab_is_available() is true during stack_depot_init(). SPARSEMEM doesn't have this issue, as it doesn't do page_ext_init_flatmem_late(), but a different page_ext_init() even later in the boot process. Thanks to Mike Rapoport for pointing out the FLATMEM init ordering issue. While at it, also actually resolve a checkpatch warning in stack_depot_init() from DRM CI, which was supposed to be in the original patch already. [1] https://lore.kernel.org/all/20211014085450.GC18719@xsang-OptiPlex-9020/ Link: https://lkml.kernel.org/r/6abd9213-19a9-6d58-cedc-2414386d2d81@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reported-by: kernel test robot <oliver.sang@intel.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Stephen Rothwell <sfr@canb.auug.org.au> From: Vlastimil Babka <vbabka@suse.cz> Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup3 Due to cd06ab2fd48f ("drm/locking: add backtrace for locking contended locks without backoff") landing recently to -next adding a new stack depot user in drivers/gpu/drm/drm_modeset_lock.c we need to add an appropriate call to stack_depot_init() there as well. Link: https://lkml.kernel.org/r/2a692365-cfa1-64f2-34e0-8aa5674dce5e@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Naresh Kamboju <naresh.kamboju@linaro.org> Cc: Marco Elver <elver@google.com> Cc: Vijayanand Jitta <vjitta@codeaurora.org> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <mripard@kernel.org> Cc: Thomas Zimmermann <tzimmermann@suse.de> Cc: David Airlie <airlied@linux.ie> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Oliver Glitta <glittao@gmail.com> Cc: Imran Khan <imran.f.khan@oracle.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> From: Vlastimil Babka <vbabka@suse.cz> Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup4 Due to 4e66934eaadc ("lib: add reference counting tracking infrastructure") landing recently to net-next adding a new stack depot user in lib/ref_tracker.c we need to add an appropriate call to stack_depot_init() there as well. Link: https://lkml.kernel.org/r/45c1b738-1a2f-5b5f-2f6d-86fab206d01c@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Eric Dumazet <edumazet@google.com> Cc: Jiri Slab <jirislaby@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
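[Editorial illustration] A minimal C sketch of the lazy-init scheme described above, condensed from the commit text; the mutex name, error handling and sizing details are assumptions for illustration, not the exact lib/stackdepot.c code:

	/* Sketch: allocate stack_table only on the first explicit init request. */
	static DEFINE_MUTEX(stack_depot_init_mutex);	/* assumed lock name */
	static bool stack_depot_disable;
	static struct stack_record **stack_table;

	int stack_depot_init(void)
	{
		size_t size = sizeof(struct stack_record *) * STACK_HASH_SIZE;
		int ret = 0;

		mutex_lock(&stack_depot_init_mutex);
		if (!stack_depot_disable && !stack_table) {
			if (slab_is_available())
				/* Late caller (page_owner, DRM): buddy/slab is up. */
				stack_table = kvmalloc(size, GFP_KERNEL);
			else
				/* Early CONFIG_STACKDEPOT_ALWAYS_INIT path via mem_init(). */
				stack_table = memblock_alloc(size, SMP_CACHE_BYTES);
			if (!stack_table) {
				/* On failure, disable stack depot entirely. */
				stack_depot_disable = true;
				ret = -ENOMEM;
			}
		}
		mutex_unlock(&stack_depot_init_mutex);
		return ret;
	}

A subsystem that only decides at runtime that it needs the depot would then call stack_depot_init() from its own init path, e.g. page_owner once its boot parameter is seen.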
2022-01-22test_sysctl: simplify subdirectory registration with register_sysctl()Luis Chamberlain
There is no need to use boilerplate code to specify a set of base directories we're going to stuff sysctls under. Simplify this by using register_sysctl() and specifying the directory path directly.

// pycocci sysctl-subdir-register-sysctl-simplify.cocci lib/test_sysctl.c

@c1@
expression E1;
identifier subdir, sysctls;
@@

static struct ctl_table subdir[] = {
	{
		.procname = E1,
		.maxlen = 0,
		.mode = 0555,
		.child = sysctls,
	},
	{ }
};

@c2@
identifier c1.subdir;
expression E2;
identifier base;
@@

static struct ctl_table base[] = {
	{
		.procname = E2,
		.maxlen = 0,
		.mode = 0555,
		.child = subdir,
	},
	{ }
};

@c3@
identifier c2.base;
identifier header;
@@

header = register_sysctl_table(base);

@r1 depends on c1 && c2 && c3@
expression c1.E1;
identifier c1.subdir, c1.sysctls;
@@

-static struct ctl_table subdir[] = {
-	{
-		.procname = E1,
-		.maxlen = 0,
-		.mode = 0555,
-		.child = sysctls,
-	},
-	{ }
-};

@r2 depends on c1 && c2 && c3@
identifier c1.subdir;
expression c2.E2;
identifier c2.base;
@@

-static struct ctl_table base[] = {
-	{
-		.procname = E2,
-		.maxlen = 0,
-		.mode = 0555,
-		.child = subdir,
-	},
-	{ }
-};

@initialize:python@
@@

def make_my_fresh_expression(s1, s2):
	return '"' + s1.strip('"') + "/" + s2.strip('"') + '"'

@r3 depends on c1 && c2 && c3@
expression c1.E1;
identifier c1.sysctls;
expression c2.E2;
identifier c2.base;
identifier c3.header;
fresh identifier E3 = script:python(E2, E1) { make_my_fresh_expression(E2, E1) };
@@

header =
-register_sysctl_table(base);
+register_sysctl(E3, sysctls);

Generated-by: Coccinelle SmPL Link: https://lkml.kernel.org/r/20211123202422.819032-6-mcgrof@kernel.org Signed-off-by: Luis Chamberlain <mcgrof@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Amir Goldstein <amir73il@gmail.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Antti Palosaari <crope@iki.fi> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Clemens Ladisch <clemens@ladisch.de> Cc: David Airlie <airlied@linux.ie> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Iurii Zaikin <yzaikin@google.com> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Jan Kara <jack@suse.cz> Cc: Joel Becker <jlbec@evilplan.org> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Joseph Qi <joseph.qi@linux.alibaba.com> Cc: Julia Lawall <julia.lawall@inria.fr> Cc: Kees Cook <keescook@chromium.org> Cc: Lukas Middendorf <kernel@tuxforce.de> Cc: Mark Fasheh <mark@fasheh.com> Cc: Paul Turner <pjt@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Phillip Potter <phil@philpotter.co.uk> Cc: Qing Wang <wangqing@vivo.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Sebastian Reichel <sre@kernel.org> Cc: Sergey Senozhatsky <senozhatsky@chromium.org> Cc: Stephen Kitt <steve@sk2.org> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Xiaoming Ni <nixiaoming@huawei.com> Cc: Douglas Gilbert <dgilbert@interlog.com> Cc: James E.J. Bottomley <jejb@linux.ibm.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: John Ogness <john.ogness@linutronix.de> Cc: Martin K. Petersen <martin.petersen@oracle.com> Cc: "Rafael J. Wysocki" <rafael@kernel.org> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
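[Editorial illustration] Concretely, the transformation the script performs looks like the following before/after sketch; the table and variable names are illustrative, loosely echoing lib/test_sysctl.c rather than copied from it:

	/* Before: two extra ctl_table levels exist only to name "debug/test_sysctl". */
	static struct ctl_table test_table[] = {
		/* ... the actual sysctl entries ... */
		{ }
	};
	static struct ctl_table test_sysctl_table[] = {
		{
			.procname = "test_sysctl",
			.maxlen = 0,
			.mode = 0555,
			.child = test_table,
		},
		{ }
	};
	static struct ctl_table test_sysctl_base_table[] = {
		{
			.procname = "debug",
			.maxlen = 0,
			.mode = 0555,
			.child = test_sysctl_table,
		},
		{ }
	};
	static struct ctl_table_header *header;

	header = register_sysctl_table(test_sysctl_base_table);

	/* After: both directory-only tables are dropped and the path is passed directly. */
	header = register_sysctl("debug/test_sysctl", test_table);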
2022-01-21Merge tag 'block-5.17-2022-01-21' of git://git.kernel.dk/linux-blockLinus Torvalds
Pull block fixes from Jens Axboe:
 "Various small fixes that should go into this release:

  - Fix an issue with cloned bios and IO accounting (Christoph)

  - Remove redundant assignments (Colin, GuoYong)

  - Fix an issue with the mq-deadline async_depth sysfs interface (me)

  - Fix a brd module loading race (Tetsuo)

  - Shared tag map wakeup fix (Laibin)

  - End-of-bdev read fix (OGAWA)

  - srcu leak fix (Ming)"

* tag 'block-5.17-2022-01-21' of git://git.kernel.dk/linux-block:
  block: fix async_depth sysfs interface for mq-deadline
  block: Fix wrong offset in bio_truncate()
  block: assign bi_bdev for cloned bios in blk_rq_prep_clone
  block: cleanup q->srcu
  block: Remove unnecessary variable assignment
  brd: remove brd_devices_mutex mutex
  aoe: remove redundant assignment on variable n
  loop: remove redundant initialization of pointer node
  blk-mq: fix tag_get wait task can't be awakened