2023-01-24 arm64: head: Clean the ID map and the HYP text to the PoC if needed (Ard Biesheuvel)
If we enter with the MMU and caches enabled, the bootloader may not have performed any cache maintenance to the PoC. So clean the ID mapped page to the PoC, to ensure that instruction and data accesses with the MMU off see the correct data. For similar reasons, clean all the HYP text to the PoC as well when entering at EL2 with the MMU and caches enabled. Note that this means primary_entry() itself needs to be moved into the ID map as well, as we will return from init_kernel_el() with the MMU and caches off. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20230111102236.1430401-6-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 arm64: head: avoid cache invalidation when entering with the MMU on (Ard Biesheuvel)
If we enter with the MMU on, there is no need for explicit cache invalidation for stores to memory, as they will be coherent with the caches. Let's take advantage of this, and create the ID map with the MMU still enabled if that is how we entered, and avoid any cache invalidation calls in that case. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20230111102236.1430401-5-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 arm64: head: record the MMU state at primary entry (Ard Biesheuvel)
Prepare for being able to deal with primary entry with the MMU and caches enabled, by recording whether or not we entered with the MMU on in register x19 and in a global variable. (Note that setting this variable to '1' does not require cache invalidation, nor is it required for storing the bootargs in that case, so omit the cache maintenance). Since boot with the MMU and caches enabled is not permitted by the bare metal boot protocol, ensure that a diagnostic is emitted and a taint bit set if the MMU was found to be enabled on a non-EFI boot, and panic() once the console is likely to be up. We will make an exception for EFI boot later, which has strict requirements for the mapping of system memory, permitting us to relax the boot protocol and hand over from the EFI stub to the core kernel with MMU and caches left enabled. While at it, add 'pre_disable_mmu_workaround' macro invocations to init_kernel_el, as its manipulation of SCTLR_ELx may amount to disabling of the MMU after subsequent patches. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20230111102236.1430401-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 arm64: kernel: move identity map out of .text mapping (Ard Biesheuvel)
Reorganize the ID map slightly so that only code that is executed with the MMU off or via the 1:1 mapping remains. This allows us to move the identity map out of the .text segment, as it will no longer need executable permissions via the kernel mapping. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20230111102236.1430401-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 arm64: head: Move all finalise_el2 calls to after __enable_mmu (Ard Biesheuvel)
In the primary boot path, finalise_el2() is called much later than on the secondary boot or resume-from-suspend paths, and this does not appear to be intentional. Since we aim to do as little as possible before enabling the MMU and caches, align secondary and resume with primary boot, and defer the call to after the MMU is turned on. This also removes the need to clean finalise_el2() to the PoC once we enable support for booting with the MMU on. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20230111102236.1430401-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS (Mark Rutland)

This patch enables support for DYNAMIC_FTRACE_WITH_CALL_OPS on arm64. This allows each ftrace callsite to provide an ftrace_ops to the common ftrace trampoline, allowing each callsite to invoke distinct tracer functions without the need to fall back to list processing or to allocate custom trampolines for each callsite. This significantly speeds up cases where multiple distinct trace functions are used and callsites are mostly traced by a single tracer.

The main idea is to place a pointer to the ftrace_ops as a literal at a fixed offset from the function entry point, which can be recovered by the common ftrace trampoline. Using a 64-bit literal avoids branch range limitations, and permits the ops to be swapped atomically without special considerations that apply to code-patching.

In future this will also allow for the implementation of DYNAMIC_FTRACE_WITH_DIRECT_CALLS without branch range limitations by using additional fields in struct ftrace_ops.

As noted in the core patch adding support for DYNAMIC_FTRACE_WITH_CALL_OPS, this approach allows for directly invoking ftrace_ops::func even for ftrace_ops which are dynamically-allocated (or part of a module), without going via ftrace_ops_list_func.

Currently, this approach is not compatible with CLANG_CFI, as the presence/absence of pre-function NOPs changes the offset of the pre-function type hash, and there's no existing mechanism to ensure a consistent offset for instrumented and uninstrumented functions. When CLANG_CFI is enabled, the existing scheme with a global ops->func pointer is used, and there should be no functional change. I am currently working with others to allow the two to work together in future (though this will likely require updated compiler support).

I've benchmarked this with the ftrace_ops sample module [1], which is not currently upstream, but available at:

  https://lore.kernel.org/lkml/20230103124912.2948963-1-mark.rutland@arm.com
  git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git ftrace-ops-sample-20230109

Using that module I measured the total time taken for 100,000 calls to a trivial instrumented function, with a number of tracers enabled with relevant filters (which would apply to the instrumented function) and a number of tracers enabled with irrelevant filters (which would not apply to the instrumented function). I tested on an M1 MacBook Pro, running under a HVF-accelerated QEMU VM (i.e. on real hardware).

Before this patch:

  Number of tracers     || Total time  | Per-call average time (ns)
  Relevant | Irrelevant ||    (ns)     | Total        | Overhead
  =========+============++=============+==============+============
         0 |          0 ||      94,583 |         0.95 |          -
         0 |          1 ||      93,709 |         0.94 |          -
         0 |          2 ||      93,666 |         0.94 |          -
         0 |         10 ||      93,709 |         0.94 |          -
         0 |        100 ||      93,792 |         0.94 |          -
  ---------+------------++-------------+--------------+------------
         1 |          1 ||   6,467,833 |        64.68 |      63.73
         1 |          2 ||   7,509,708 |        75.10 |      74.15
         1 |         10 ||  23,786,792 |       237.87 |     236.92
         1 |        100 || 106,432,500 |     1,064.43 |    1063.38
  ---------+------------++-------------+--------------+------------
         1 |          0 ||   1,431,875 |        14.32 |      13.37
         2 |          0 ||   6,456,334 |        64.56 |      63.62
        10 |          0 ||  22,717,000 |       227.17 |     226.22
       100 |          0 || 103,293,667 |      1032.94 |    1031.99
  ---------+------------++-------------+--------------+------------

  Note: per-call overhead is estimated relative to the baseline case with 0 relevant tracers and 0 irrelevant tracers.

After this patch:

  Number of tracers     || Total time  | Per-call average time (ns)
  Relevant | Irrelevant ||    (ns)     | Total        | Overhead
  =========+============++=============+==============+============
         0 |          0 ||      94,541 |         0.95 |          -
         0 |          1 ||      93,666 |         0.94 |          -
         0 |          2 ||      93,709 |         0.94 |          -
         0 |         10 ||      93,667 |         0.94 |          -
         0 |        100 ||      93,792 |         0.94 |          -
  ---------+------------++-------------+--------------+------------
         1 |          1 ||     281,000 |         2.81 |       1.86
         1 |          2 ||     281,042 |         2.81 |       1.87
         1 |         10 ||     280,958 |         2.81 |       1.86
         1 |        100 ||     281,250 |         2.81 |       1.87
  ---------+------------++-------------+--------------+------------
         1 |          0 ||     280,959 |         2.81 |       1.86
         2 |          0 ||   6,502,708 |        65.03 |      64.08
        10 |          0 ||  18,681,209 |       186.81 |     185.87
       100 |          0 || 103,550,458 |     1,035.50 |    1034.56
  ---------+------------++-------------+--------------+------------

  Note: per-call overhead is estimated relative to the baseline case with 0 relevant tracers and 0 irrelevant tracers.

As can be seen from the above:

a) Whenever there is a single relevant tracer function associated with a tracee, the overhead of invoking the tracer is constant, and does not scale with the number of tracers which are *not* associated with that tracee.

b) The overhead for a single relevant tracer has dropped to ~1/7 of the overhead prior to this series (from 13.37ns to 1.86ns). This is largely due to permitting calls to dynamically-allocated ftrace_ops without going through ftrace_ops_list_func.

I've run the ftrace selftests from v6.2-rc3, which reports:

  | # of passed: 110
  | # of failed: 0
  | # of unresolved: 3
  | # of untested: 0
  | # of unsupported: 0
  | # of xfailed: 1
  | # of undefined(test bug): 0

... where the unresolved entries were the tests for DIRECT functions (which are not supported), and the checkbashisms selftest (which is irrelevant here):

  | [8] Test ftrace direct functions against tracers [UNRESOLVED]
  | [9] Test ftrace direct functions against kprobes [UNRESOLVED]
  | [62] Meta-selftest: Checkbashisms [UNRESOLVED]

... with all other tests passing (or failing as expected).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Florent Revest <revest@chromium.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230123134603.1064407-9-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 arm64: ftrace: Update stale comment (Mark Rutland)
In commit: 26299b3f6ba26bfc ("ftrace: arm64: move from REGS to ARGS") ... we folded ftrace_regs_entry into ftrace_caller, and ftrace_regs_entry no longer exists. Update the comment accordingly. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Florent Revest <revest@chromium.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230123134603.1064407-8-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 arm64: patching: Add aarch64_insn_write_literal_u64() (Mark Rutland)
In subsequent patches we'll need to atomically write to a naturally-aligned 64-bit literal embedded within the kernel text. Add a helper for this. For consistency with other text patching code we use copy_to_kernel_nofault(), which is atomic for naturally-aligned accesses up to 64-bits. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Florent Revest <revest@chromium.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230123134603.1064407-7-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
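As a rough illustration of the helper described above, a sketch along the following lines would do the job. The patch_lock, patch_map()/patch_unmap() and FIX_TEXT_POKE0 names are assumed here to match the existing arm64 text-patching code; treat this as a sketch rather than the exact patch:

  int aarch64_insn_write_literal_u64(void *addr, u64 val)
  {
  	u64 *waddr;
  	unsigned long flags;
  	int ret;

  	raw_spin_lock_irqsave(&patch_lock, flags);
  	waddr = patch_map(addr, FIX_TEXT_POKE0);

  	/* Atomic for naturally-aligned writes up to 64 bits. */
  	ret = copy_to_kernel_nofault(waddr, &val, sizeof(val));

  	patch_unmap(FIX_TEXT_POKE0);
  	raw_spin_unlock_irqrestore(&patch_lock, flags);

  	return ret;
  }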
2023-01-24 arm64: insn: Add helpers for BTI (Mark Rutland)

In subsequent patches we'd like to check whether an instruction is a BTI. In preparation for this, add basic instruction helpers for BTI instructions.

Per ARM DDI 0487H.a section C6.2.41, BTI is encoded in binary as follows, MSB to LSB:

  1101 0101 0000 0011 0010 0100 xx01 1111

Where the `xx` bits encode J/C/JC:

  00 : (omitted)
  01 : C
  10 : J
  11 : JC

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Florent Revest <revest@chromium.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230123134603.1064407-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
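For illustration, a check for the encoding quoted above could look like the following. The mask/value are derived directly from that bit pattern (all bits fixed except the xx field in bits [7:6]); the macro and function names are only examples, while the kernel's own helpers follow the aarch64_insn_is_*() convention:

  #define EXAMPLE_BTI_INSN_MASK	0xffffff3fu	/* every bit fixed except xx (bits [7:6]) */
  #define EXAMPLE_BTI_INSN_VALUE	0xd503241fu	/* BTI with xx = 00 */

  static bool example_insn_is_bti(u32 insn)
  {
  	return (insn & EXAMPLE_BTI_INSN_MASK) == EXAMPLE_BTI_INSN_VALUE;
  }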
2023-01-24 arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT (Mark Rutland)

On arm64 we don't align assembly functions in the same way as C functions. This somewhat limits the utility of CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B for testing, and adds noise when testing that we're correctly aligning functions as will be necessary for ftrace in subsequent patches.

Follow the example of x86, and align assembly functions in the same way as C functions. Selecting FUNCTION_ALIGNMENT_4B ensures CONFIG_FUNCTION_ALIGNMENT will be a minimum of 4 bytes, matching the minimum alignment that __ALIGN and __ALIGN_STR provide prior to this patch.

I've tested this by selecting CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B=y, building and booting a kernel, and looking for misaligned text symbols:

  Before, v6.2-rc3:
  # uname -rm
  6.2.0-rc3 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  5009

  Before, v6.2-rc3 + fixed __cold:
  # uname -rm
  6.2.0-rc3-00001-g2a2bedf8bfa9 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  919

  Before, v6.2-rc3 + fixed __cold + fixed ACPICA:
  # uname -rm
  6.2.0-rc3-00002-g267bddc38572 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  323
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep acpi | wc -l
  0

  After:
  # uname -rm
  6.2.0-rc3-00003-g71db61ee3ea1 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  112

Considering the remaining 112 unaligned text symbols:

* 20 are non-function KVM NVHE assembly symbols, which are never instrumented by ftrace:

  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep __kvm_nvhe | wc -l
  20
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep __kvm_nvhe
  ffffbe6483f73784 t __kvm_nvhe___invalid
  ffffbe6483f73788 t __kvm_nvhe___do_hyp_init
  ffffbe6483f73ab0 t __kvm_nvhe_reset
  ffffbe6483f73b8c T __kvm_nvhe___hyp_idmap_text_end
  ffffbe6483f73b8c T __kvm_nvhe___hyp_text_start
  ffffbe6483f77864 t __kvm_nvhe___host_enter_restore_full
  ffffbe6483f77874 t __kvm_nvhe___host_enter_for_panic
  ffffbe6483f778a4 t __kvm_nvhe___host_enter_without_restoring
  ffffbe6483f81178 T __kvm_nvhe___guest_exit_panic
  ffffbe6483f811c8 T __kvm_nvhe___guest_exit
  ffffbe6483f81354 t __kvm_nvhe_abort_guest_exit_start
  ffffbe6483f81358 t __kvm_nvhe_abort_guest_exit_end
  ffffbe6483f81830 t __kvm_nvhe_wa_epilogue
  ffffbe6483f81844 t __kvm_nvhe_el1_trap
  ffffbe6483f81864 t __kvm_nvhe_el1_fiq
  ffffbe6483f81864 t __kvm_nvhe_el1_irq
  ffffbe6483f81884 t __kvm_nvhe_el1_error
  ffffbe6483f818a4 t __kvm_nvhe_el2_sync
  ffffbe6483f81920 t __kvm_nvhe_el2_error
  ffffbe6483f865c8 T __kvm_nvhe___start___kvm_ex_table

* 53 are position-independent functions only used during early boot, which are built with '-Os', but are never instrumented by ftrace:

  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep __pi | wc -l
  53

  We *could* drop '-Os' when building these for consistency, but that is not necessary to ensure that ftrace works correctly.

* The remaining 39 are non-function symbols and 3 runtime BPF functions, which are never instrumented by ftrace:

  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep -v __kvm_nvhe | grep -v __pi | wc -l
  39
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep -v __kvm_nvhe | grep -v __pi
  ffffbe6482e1009c T __irqentry_text_end
  ffffbe6482e10358 T __softirqentry_text_end
  ffffbe6482e1435c T __entry_text_end
  ffffbe6482e825f8 T __guest_exit_panic
  ffffbe6482e82648 T __guest_exit
  ffffbe6482e827d4 t abort_guest_exit_start
  ffffbe6482e827d8 t abort_guest_exit_end
  ffffbe6482e83030 t wa_epilogue
  ffffbe6482e83044 t el1_trap
  ffffbe6482e83064 t el1_fiq
  ffffbe6482e83064 t el1_irq
  ffffbe6482e83084 t el1_error
  ffffbe6482e830a4 t el2_sync
  ffffbe6482e83120 t el2_error
  ffffbe6482e93550 T sha256_block_neon
  ffffbe64830f3ae0 t e843419@01cc_00002a0c_3104
  ffffbe648378bd90 t e843419@09b3_0000d7cb_bc4
  ffffbe6483bdab20 t e843419@0c66_000116e2_34c8
  ffffbe6483f62c94 T __noinstr_text_end
  ffffbe6483f70a18 T __sched_text_end
  ffffbe6483f70b2c T __cpuidle_text_end
  ffffbe6483f722d4 T __lock_text_end
  ffffbe6483f73b8c T __hyp_idmap_text_end
  ffffbe6483f73b8c T __hyp_text_start
  ffffbe6483f865c8 T __start___kvm_ex_table
  ffffbe6483f870d0 t init_el1
  ffffbe6483f870f8 t init_el2
  ffffbe6483f87324 t pen
  ffffbe6483f87b48 T __idmap_text_end
  ffffbe64848eb010 T __hibernate_exit_text_start
  ffffbe64848eb124 T __hibernate_exit_text_end
  ffffbe64848eb124 T __relocate_new_kernel_start
  ffffbe64848eb260 T __relocate_new_kernel_end
  ffffbe648498a8e8 T _einittext
  ffffbe648498a8e8 T __exittext_begin
  ffffbe6484999d84 T __exittext_end
  ffff8000080756b4 t bpf_prog_6deef7357e7b4530 [bpf]
  ffff80000808dd78 t bpf_prog_6deef7357e7b4530 [bpf]
  ffff80000809d684 t bpf_prog_6deef7357e7b4530 [bpf]

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Florent Revest <revest@chromium.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230123134603.1064407-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 ACPI: Don't build ACPICA with '-Os' (Mark Rutland)

The ACPICA code has been built with '-Os' since the beginning of git history, though there's no explanatory comment as to why.

This is unfortunate as GCC drops the alignment specified by '-falign-functions=N' when '-Os' is used, as reported in GCC bug 88345:

  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88345

This prevents CONFIG_FUNCTION_ALIGNMENT and CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B from having their expected effect on the ACPICA code. This is doubly unfortunate as in subsequent patches arm64 will depend upon CONFIG_FUNCTION_ALIGNMENT for its ftrace implementation.

Drop the '-Os' flag when building the ACPICA code. With this removed, the code builds cleanly and works correctly in testing so far.

I've tested this by selecting CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B=y, building and booting a kernel using ACPI, and looking for misaligned text symbols:

* arm64:

  Before, v6.2-rc3:
  # uname -rm
  6.2.0-rc3 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  5009

  Before, v6.2-rc3 + fixed __cold:
  # uname -rm
  6.2.0-rc3-00001-g2a2bedf8bfa9 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  919

  After:
  # uname -rm
  6.2.0-rc3-00002-g267bddc38572 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  323
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep acpi | wc -l
  0

* x86_64:

  Before, v6.2-rc3:
  # uname -rm
  6.2.0-rc3 x86_64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  11537

  Before, v6.2-rc3 + fixed __cold:
  # uname -rm
  6.2.0-rc3-00001-g2a2bedf8bfa9 x86_64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  2805

  After:
  # uname -rm
  6.2.0-rc3-00002-g267bddc38572 x86_64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  1357
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep acpi | wc -l
  0

With the patch applied, the remaining unaligned text labels are a combination of static call trampolines and labels in assembly, which can be dealt with in subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Florent Revest <revest@chromium.org>
Cc: Len Brown <lenb@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Moore <robert.moore@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will@kernel.org>
Cc: linux-acpi@vger.kernel.org
Link: https://lore.kernel.org/r/20230123134603.1064407-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-24 Compiler attributes: GCC cold function alignment workarounds (Mark Rutland)

Contemporary versions of GCC (e.g. GCC 12.2.0) drop the alignment specified by '-falign-functions=N' for functions marked with the __cold__ attribute, and potentially for callees of __cold__ functions as these may be implicitly marked as __cold__ by the compiler. LLVM appears to respect '-falign-functions=N' in such cases.

This has been reported to GCC in bug 88345:

  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88345

... which also covers alignment being dropped when '-Os' is used, which will be addressed in a separate patch.

Currently, use of '-falign-functions=N' is limited to CONFIG_FUNCTION_ALIGNMENT, which is largely used for performance and/or analysis reasons (e.g. with CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B), but isn't necessary for correct functionality. However, this dropped alignment isn't great for the performance and/or analysis cases.

Subsequent patches will use CONFIG_FUNCTION_ALIGNMENT as part of arm64's ftrace implementation, which will require all instrumented functions to be aligned to at least 8-bytes.

This patch works around the dropped alignment by avoiding the use of the __cold__ attribute when CONFIG_FUNCTION_ALIGNMENT is non-zero, and by specifically aligning abort(), which GCC implicitly marks as __cold__. As the __cold macro is now dependent upon config options (which is against the policy described at the top of compiler_attributes.h), it is moved into compiler_types.h.

I've tested this by building and booting a kernel configured with defconfig + CONFIG_EXPERT=y + CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B=y, and looking for misaligned text symbols in /proc/kallsyms:

* arm64:

  Before:
  # uname -rm
  6.2.0-rc3 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  5009

  After:
  # uname -rm
  6.2.0-rc3-00001-g2a2bedf8bfa9 aarch64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  919

* x86_64:

  Before:
  # uname -rm
  6.2.0-rc3 x86_64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  11537

  After:
  # uname -rm
  6.2.0-rc3-00001-g2a2bedf8bfa9 x86_64
  # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
  2805

There's clearly a substantial reduction in the number of misaligned symbols. From manual inspection, the remaining unaligned text labels are a combination of ACPICA functions (due to the use of '-Os'), static call trampolines, and non-function labels in assembly, which will be dealt with in subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Florent Revest <revest@chromium.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will@kernel.org>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20230123134603.1064407-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
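A minimal sketch of the workaround described above, assuming it lands in compiler_types.h roughly as described (the exact condition used by the real patch may differ):

  /*
   * GCC may drop -falign-functions=N for __cold functions (GCC bug 88345),
   * so only use the attribute when function alignment is not being forced.
   */
  #if defined(CONFIG_CC_IS_GCC) && (CONFIG_FUNCTION_ALIGNMENT > 0)
  #define __cold
  #else
  #define __cold	__attribute__((__cold__))
  #endif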
2023-01-24 ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS (Mark Rutland)

Architectures without dynamic ftrace trampolines incur an overhead when multiple ftrace_ops are enabled with distinct filters. In these cases, each call site calls a common trampoline which uses ftrace_ops_list_func() to iterate over all enabled ftrace functions, and so incurs an overhead relative to the size of this list (including RCU protection overhead).

Architectures with dynamic ftrace trampolines avoid this overhead for call sites which have a single associated ftrace_ops. In these cases, the dynamic trampoline is customized to branch directly to the relevant ftrace function, avoiding the list overhead.

On some architectures it's impractical and/or undesirable to implement dynamic ftrace trampolines. For example, arm64 has limited branch ranges and cannot always directly branch from a call site to an arbitrary address (e.g. from a kernel text address to an arbitrary module address). Calls from modules to core kernel text can be indirected via PLTs (allocated at module load time) to address this, but the same is not possible for calls from core kernel text.

Using an indirect branch from a call site to an arbitrary trampoline is possible, but requires several more instructions in the function prologue (or immediately before it), and/or comes with far more complex requirements for patching.

Instead, this patch adds a new option, where an architecture can associate each call site with a pointer to an ftrace_ops, placed at a fixed offset from the call site. A shared trampoline can recover this pointer and call ftrace_ops::func() without needing to go via ftrace_ops_list_func(), avoiding the associated overhead.

This avoids issues with branch range limitations, and avoids the need to allocate and manipulate dynamic trampolines, making it far simpler to implement and maintain, while having similar performance characteristics.

Note that this allows for dynamic ftrace_ops to be invoked directly from an architecture's ftrace_caller trampoline, whereas existing code forces the use of ftrace_ops_list_func(), which is in part necessary to permit the ftrace_ops to be freed once unregistered *and* to avoid branch/address-generation range limitations on some architectures (e.g. where ops->func is a module address, and may be outside of the direct branch range for callsites within the main kernel image). The CALL_OPS approach avoids these problems and is safe as:

* The existing synchronization in ftrace_shutdown() using synchronize_rcu_tasks_rude() (and synchronize_rcu_tasks()) ensures that no tasks hold a stale reference to an ftrace_ops (e.g. in the middle of the ftrace_caller trampoline, or while invoking ftrace_ops::func), when that ftrace_ops is unregistered.

  Arguably this could also be relied upon for the existing scheme, permitting dynamic ftrace_ops to be invoked directly when ops->func is in range, but this will require additional logic to handle branch range limitations, and is not handled by this patch.

* Each callsite's ftrace_ops pointer literal can hold any valid kernel address, and is updated atomically. As an architecture's ftrace_caller trampoline will atomically load the ops pointer then dereference ops->func, there is no risk of invoking ops->func with a mismatched ops pointer, and updates to the ops pointer do not require special care.

A subsequent patch will implement architecture support for arm64. There should be no functional change as a result of this patch alone.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Cc: Florent Revest <revest@chromium.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230123134603.1064407-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
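As a conceptual sketch of the scheme described in this entry and in the arm64 implementation above (this is not the kernel's actual trampoline code; the offset, macro and function names are purely illustrative): each call site has an ftrace_ops pointer stored as a naturally-aligned literal at a fixed offset before the function entry point, and the common trampoline loads that literal and invokes ops->func() directly instead of going via ftrace_ops_list_func():

  #include <linux/ftrace.h>

  /* Hypothetical distance of the literal from the function entry point. */
  #define EXAMPLE_CALL_OPS_OFFSET	0x10

  static struct ftrace_ops *example_callsite_ops(unsigned long func_entry)
  {
  	/* The literal is naturally aligned and is updated atomically. */
  	return *(struct ftrace_ops * volatile *)(func_entry - EXAMPLE_CALL_OPS_OFFSET);
  }

  static void example_ftrace_caller(unsigned long ip, unsigned long parent_ip,
  				  struct ftrace_regs *fregs)
  {
  	struct ftrace_ops *ops = example_callsite_ops(ip);

  	/* Call the per-callsite ops directly; no list iteration needed. */
  	ops->func(ip, parent_ip, ops, fregs);
  }

In the real implementation the load and call are done in the architecture's assembly trampoline; the sketch only shows why a single atomically-updated pointer per call site is sufficient.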
2023-01-20 kselftest/arm64: Correct buffer size for SME ZA storage (Zenghui Yu)

It looks like a copy-paste error to describe the ZA buffer size using (the number of P registers * the maximum size of a Z register). This doesn't have practical impact though, as we're always allocating enough space even for the architectural maximum ZA storage, with SVL equal to 2048 bits.

Switch to using ZA_SIG_REGS_SIZE(SVE_VQ_MAX). setup_za() will need to initialize two 64MB arrays with this change, and can be optimized later (if someone complains).

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221218092942.1940-2-yuzenghui@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
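For reference, the 64MB figure follows from the signal frame macros, assuming the usual definitions of __SVE_VQ_BYTES == 16 and SVE_VQ_MAX == 512 (a worked sketch, not a quote from the patch):

  /* ZA is an SVL x SVL byte array, so for a given vq (vector quadwords): */
  #define EXAMPLE_ZA_SIG_REGS_SIZE(vq)	(((vq) * 16) * ((vq) * 16))

  /*
   * EXAMPLE_ZA_SIG_REGS_SIZE(SVE_VQ_MAX) = (512 * 16) * (512 * 16)
   *                                      = 8192 * 8192
   *                                      = 64MiB per buffer
   */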
2023-01-20 kselftest/arm64: Remove the local NUM_VL definition (Zenghui Yu)
It was introduced in commit b77e995e3b96 ("kselftest/arm64: Add a test program to exercise the syscall ABI") but never actually used. Remove it. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221218092942.1940-1-yuzenghui@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: cpufeature: Use kstrtobool() instead of strtobool() (Christophe JAILLET)

strtobool() is the same as kstrtobool(). However, the latter is more widely used within the kernel. In order to remove strtobool() and slightly simplify kstrtox.h, switch to the other function name. While at it, include the corresponding header file (<linux/kstrtox.h>).

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/5a1b329cda34aec67615c0d2fd326eb0d6634bf7.1667336095.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Verify simultaneous SSVE and ZA context generation (Mark Brown)
Add a test that generates SSVE and ZA context in a single signal frame to ensure that nothing is going wrong in that case for any reason. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230117-arm64-test-ssve-za-v1-2-203c00150154@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Verify that SSVE signal context has SVE_SIG_FLAG_SM set (Mark Brown)
Streaming mode SVE signal context should have SVE_SIG_FLAG_SM set but we were not actually validating this. Add a check. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230117-arm64-test-ssve-za-v1-1-203c00150154@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Remove spurious comment from MTE test Makefile (Mark Brown)

There's a stray comment in the MTE test Makefile which documents something that has since been removed; delete it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20230111-arm64-kselftest-clang-v1-6-89c69d377727@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Support build of MTE tests with clang (Mark Brown)

The assembly portions of the MTE selftests need to be built with a toolchain supporting MTE. Since we support GCC versions that lack MTE support we have logic to suppress the build of these tests when using such a toolchain, but that logic is broken for LLVM=1 builds: it uses CC, but CC is only set for LLVM builds in libs.mk, which needs to be included after we have selected which test programs to build. Since all supported LLVM versions support MTE we can simply assume MTE support when LLVM is set. This is not a thing of beauty but it does the job.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20230111-arm64-kselftest-clang-v1-5-89c69d377727@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Initialise current at build time in signal tests (Mark Brown)

When building with clang the toolchain refuses to link the signals testcases since the assembly code has a reference to current which has no initialiser so is placed in the BSS:

  /tmp/signals-af2042.o: in function `fake_sigreturn':
  <unknown>:51:(.text+0x40): relocation truncated to fit: R_AARCH64_LD_PREL_LO19 against symbol `current' defined in .bss section in /tmp/test_signals-ec1160.o

Since the first statement in main() initialises current we may as well fix this by moving the initialisation to build time so the variable doesn't end up in the BSS.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20230111-arm64-kselftest-clang-v1-4-89c69d377727@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
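A sketch of the kind of change described; the variable is real, but the initialiser name used here is illustrative and the authoritative code is in the selftest's test_signals.c:

  /* Before: no initialiser, so 'current' lands in .bss and the PC-relative
   * load from the assembly helper can be out of range for clang's linker. */
  struct tdescr *current;

  /* After: initialise at build time so the variable is placed in .data,
   * matching what the first statement of main() previously did at runtime. */
  struct tdescr *current = &tde;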
2023-01-20 kselftest/arm64: Don't pass headers to the compiler as source (Mark Brown)

The signal Makefile rules pass all the dependencies for each executable, including headers, to the compiler, which GCC is happy enough with but clang rejects:

  clang --target=aarch64-none-linux-gnu -fintegrated-as -Wall -O2 -g -I/home/broonie/git/linux/tools/testing/selftests/ -isystem /home/broonie/git/linux/usr/include -D_GNU_SOURCE -std=gnu99 -I. test_signals.c test_signals_utils.c testcases/testcases.c signals.S testcases/fake_sigreturn_bad_magic.c test_signals.h test_signals_utils.h testcases/testcases.h -o testcases/fake_sigreturn_bad_magic

  clang: error: cannot specify -o when generating multiple output files

This happens because clang gets confused about what to do with the header files, failing to identify them as source. This is not amazing behaviour on clang's part and should ideally be fixed, but even if that happens we'd still need a new clang release, so let's instead rework the Makefile so we use variables for the lists of header and source files, allowing us to only pass the source files to the compiler and keep clang happy. As a bonus the resulting Makefile is a bit easier to read.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20230111-arm64-kselftest-clang-v1-3-89c69d377727@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Remove redundant _start labels from FP tests (Mark Brown)

There are a number of freestanding static executables used in floating point testing that have no runtime at all. These all define the main entry point as:

  .globl _start
  function _start
  _start:

but clang's integrated assembler complains that:

  error: symbol '_start' is already defined

due to having both a label and function directive. Remove the label to allow building with clang. No functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20230111-arm64-kselftest-clang-v1-2-89c69d377727@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Fix .pushsection for strings in FP tests (Mark Brown)

The .pushsection directive used to store the strings used with the .puts macro in the floating point helpers does not provide a section type, but according to the gas documentation this should be mandatory, and with the clang built-in assembler it actually is. Provide one so that we can build these tests with LLVM=1. No functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20230111-arm64-kselftest-clang-v1-1-89c69d377727@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Add compat hwcap SSBS (Amit Daniel Kachhap)

This hwcap was added for the 32-bit native arm kernel by commit fea53546be57 ("ARM: 9274/1: Add hwcap for Speculative Store Bypassing Safe"), and hence the corresponding changes are added to the 32-bit compat arm64 kernel for similar user interfaces. Speculative Store Bypass Safe is a feature (FEAT_SSBS) present in the AArch32/AArch64 state of Armv8 and can be identified by the PFR2.SSBS identification register. This hwcap is already advertised in the native arm64 kernel.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230111053706.13994-8-amit.kachhap@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Add compat hwcap SB (Amit Daniel Kachhap)

This hwcap was added for the 32-bit native arm kernel by commit 3bda6d884897 ("ARM: 9273/1: Add hwcap for Speculation Barrier(SB)"), and hence the corresponding changes are added to the 32-bit compat arm64 kernel. Speculation Barrier is a feature (FEAT_SB) present in both the AArch32 and AArch64 states. This hwcap is already advertised in the native arm64 kernel.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230111053706.13994-7-amit.kachhap@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Add compat hwcap I8MM (Amit Daniel Kachhap)

This hwcap was added earlier for the 32-bit native arm kernel by commit 956ca3a4eb81 ("ARM: 9272/1: vfp: Add hwcap for FEAT_AA32I8MM"), and hence the corresponding changes are added to the 32-bit compat arm64 kernel for similar user interfaces. Int8 matrix multiplication is a feature (FEAT_AA32I8MM) present in the AArch32 state of Armv8 and is identified by the ISAR6.I8MM register. A similar feature (FEAT_I8MM) exists for the AArch64 state and is already advertised in the arm64 kernel.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230111053706.13994-6-amit.kachhap@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Add compat hwcap ASIMDBF16 (Amit Daniel Kachhap)

This hwcap was added earlier for the 32-bit native arm kernel by commit 23b6d4ad6e7a ("ARM: 9271/1: vfp: Add hwcap for FEAT_AA32BF16"), and hence the corresponding changes are added to the 32-bit compat arm64 kernel. The Brain 16-bit floating-point storage format is a feature (FEAT_AA32BF16) present in the AArch32 state of Armv8 and is represented by the ISAR6.BF16 identification register. A similar feature (FEAT_BF16) exists for the AArch64 state and is already advertised in the native arm64 kernel.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230111053706.13994-5-amit.kachhap@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Add compat hwcap ASIMDFHM (Amit Daniel Kachhap)

This hwcap was added earlier for the 32-bit native arm kernel by commit ce4835497c20 ("ARM: 9270/1: vfp: Add hwcap for FEAT_FHM"), and hence the corresponding changes are added to the 32-bit compat arm64 kernel for similar user interfaces. Floating-point half-precision multiplication (FHM) is a feature present in the AArch32/AArch64 state of Armv8. This hwcap is already advertised in the native arm64 kernel.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230111053706.13994-4-amit.kachhap@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Add compat hwcap ASIMDDP (Amit Daniel Kachhap)

This hwcap was added earlier for the 32-bit native arm kernel by commit 62ea0d873af3 ("ARM: 9269/1: vfp: Add hwcap for FEAT_DotProd"), and hence the corresponding changes are added to the 32-bit compat arm64 kernel for similar user interfaces. Advanced SIMD dot product is a feature (FEAT_DotProd) present in both the AArch32 and AArch64 states of Armv8 and is already advertised in the native arm64 kernel.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230111053706.13994-3-amit.kachhap@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Add compat hwcap FPHP and ASIMDHP (Amit Daniel Kachhap)

These hwcaps were added earlier for the 32-bit native arm kernel by commit c00a19c8b143 ("ARM: 9268/1: vfp: Add hwcap FPHP and ASIMDHP for FEAT_FP16"), and hence the corresponding changes are added to the 32-bit compat arm64 kernel for similar userspace interfaces. Floating-point half-precision (FPHP) and Advanced SIMD half-precision (ASIMDHP) represent the Armv8 FP16 feature extension and are already advertised in the native arm64 kernel.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230111053706.13994-2-amit.kachhap@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Stash shadow stack pointer in the task struct on interrupt (Ard Biesheuvel)
Instead of reloading the shadow call stack pointer from the ordinary stack, which may be vulnerable to the kind of gadget based attacks shadow call stacks were designed to prevent, let's store a task's shadow call stack pointer in the task struct when switching to the shadow IRQ stack. Given that currently, the task_struct::scs_sp field is only used to preserve the shadow call stack pointer while a task is scheduled out or running in user space, reusing this field to preserve and restore it while running off the IRQ stack must be safe, as those occurrences are guaranteed to never overlap. (The stack switching logic only switches stacks when running from the task stack, and so the value being saved here always corresponds to the task mode shadow stack) While at it, fold a mov/add/mov sequence into a single add. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20230109174800.3286265-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Always load shadow stack pointer directly from the task struct (Ard Biesheuvel)
All occurrences of the scs_load macro load the value of the shadow call stack pointer from the task which is current at that point. So instead of taking a task struct register argument in the scs_load macro to specify the task struct to load from, let's always reference the current task directly. This should make it much harder to exploit any instruction sequences reloading the shadow call stack pointer register from memory. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20230109174800.3286265-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64: Avoid repeated AA64MMFR1_EL1 register read on pagefault path (Gabriel Krisman Bertazi)
Accessing AA64MMFR1_EL1 is expensive in KVM guests, since it is emulated in the hypervisor. In fact, ARM documentation mentions some feature registers are not supposed to be accessed frequently by the OS, and therefore should be emulated for guests [1]. Commit 0388f9c74330 ("arm64: mm: Implement arch_wants_old_prefaulted_pte()") introduced a read of this register in the page fault path. But, even when the feature of setting faultaround pages with the old flag is disabled for a given cpu, we are still paying the cost of checking the register on every pagefault. This results in an explosion of vmexit events in KVM guests, which directly impacts the performance of virtualized workloads. For instance, running kernbench yields a 15% increase in system time solely due to the increased vmexit cycles. This patch avoids the extra cost by using the sanitized cached value. It should be safe to do so, since this register mustn't change for a given cpu. [1] https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/Learn%20the%20Architecture/Armv8-A%20virtualization.pdf?revision=a765a7df-1a00-434d-b241-357bfda2dd31 Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de> Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20230109151955.8292-1-krisman@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
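For illustration, the kind of change described above looks roughly like the following; the helpers are from the cpufeature API, while the function and field names may differ slightly from the actual patch:

  static inline bool example_system_has_hw_af(void)
  {
  	u64 mmfr1;

  	if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
  		return false;

  	/* Use the sanitised, cached copy rather than an MRS (and a vmexit
  	 * in a guest) on every page fault. */
  	mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
  	return cpuid_feature_extract_unsigned_field(mmfr1,
  						    ID_AA64MMFR1_EL1_HAFDBS_SHIFT);
  }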
2023-01-20 kselftest/arm64: Add test case for TPIDR2 signal frame records (Mark Brown)

Ensure that we get signal context for TPIDR2 if and only if SME is present on the system. Since TPIDR2 is owned by libc we merely validate that the value is whatever it was set to. This isn't ideal, since it's likely to just be the default of 0 on current systems, but it avoids future false positives.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221208-arm64-tpidr2-sig-v3-4-c77c6c8775f4@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Add TPIDR2 to the set of known signal context records (Mark Brown)
When validating the set of signal context records check that any TPIDR2 record has the correct size, also suppressing warnings due to seeing an unknown record type. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-tpidr2-sig-v3-3-c77c6c8775f4@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64/signal: Include TPIDR2 in the signal context (Mark Brown)

Add a new signal frame record for TPIDR2 using the same format as we already use for ESR but with a different magic: a header with the value from the register appended as the only data. If SME is supported then this record is always included.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Link: https://lore.kernel.org/r/20221208-arm64-tpidr2-sig-v3-2-c77c6c8775f4@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
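The record format described amounts to something like the following sketch; the authoritative definition and the TPIDR2_MAGIC value live in the UAPI sigcontext header:

  struct tpidr2_context {
  	struct _aarch64_ctx head;	/* head.magic = TPIDR2_MAGIC, head.size = sizeof(struct tpidr2_context) */
  	__u64 tpidr2;			/* value of TPIDR2_EL0 when the signal was delivered */
  };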
2023-01-20 arm64/sme: Document ABI for TPIDR2 signal information (Mark Brown)
In order to allow access to TPIDR2 from signal handlers we need to add it to the signal context, document that we are doing so. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-tpidr2-sig-v3-1-c77c6c8775f4@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Add coverage of SME 2 and 2.1 hwcaps (Mark Brown)
Add the hwcaps defined by SME 2 and 2.1 to the hwcaps test. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-21-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Add coverage of the ZT ptrace regset (Mark Brown)
Add coverage of the ZT ptrace interface. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-20-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Add SME2 coverage to syscall-abi (Mark Brown)
Verify that ZT0 is preserved over syscalls when it is present and PSTATE.ZA is set. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-19-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Add test coverage for ZT register signal frames (Mark Brown)

We should have a ZT register frame with an expected size when ZA is enabled and no ZT frame when ZA is disabled. Since we don't load any data into ZT we expect the contents to be all zeros, as the architecture guarantees ZT is set to 0 when ZA is enabled.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-18-f2fa0aef982f@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Teach the generic signal context validation about ZT (Mark Brown)
Add ZT to the set of signal contexts that the shared code understands and validates the form of. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-17-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Enumerate SME2 in the signal test utility code (Mark Brown)
Support test cases for SME2 by adding it to the set of features that we enumerate so test cases can check for it. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-16-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Cover ZT in the FP stress test (Mark Brown)
Hook up the newly added zt-test program in the FPSIMD stress tests, start a copy per CPU when SME2 is supported. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-15-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 kselftest/arm64: Add a stress test program for ZT0 (Mark Brown)
Following the pattern for the other register sets add a stress test program for ZT0 which continually loads and verifies patterns in the register in an effort to discover context switching problems. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-14-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64/sme: Add hwcaps for SME 2 and 2.1 features (Mark Brown)
In order to allow userspace to discover the presence of the new SME features add hwcaps for them. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-13-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64/sme: Implement ZT0 ptrace support (Mark Brown)

Implement support for a new note type NT_ARM64_ZT providing access to ZT0 when implemented. Since ZT0 is a register with a constant size this is much simpler than for other SME state.

As ZT0 is only accessible when PSTATE.ZA is set, writes to ZT0 cause PSTATE.ZA to be set; the main alternative would be to return -EBUSY in this case, but this seemed more constructive. Practical users are also going to be working with ZA anyway and have some understanding of the state.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-12-f2fa0aef982f@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20 arm64/sme: Implement signal handling for ZT (Mark Brown)
Add a new signal context type for ZT which is present in the signal frame when ZA is enabled and ZT is supported by the system. In order to account for the possible addition of further ZT registers in the future we make the number of registers variable in the ABI, though currently the only possible number is 1. We could just use a bare list head for the context since the number of registers can be inferred from the size of the context but for usability and future extensibility we define a header with the number of registers and some reserved fields in it. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-11-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
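A sketch of the variable-length record layout described above; the authoritative UAPI definition, the magic value and the per-register size macro live in the sigcontext header:

  struct zt_context {
  	struct _aarch64_ctx head;	/* head.magic = ZT_MAGIC */
  	__u16 nregs;			/* number of ZT registers, currently always 1 */
  	__u16 __reserved[3];		/* reserved for future extension */
  	/* nregs blocks of ZT register data follow, 512 bits (64 bytes) each */
  };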
2023-01-20 arm64/sme: Implement context switching for ZT0 (Mark Brown)
When the system supports SME2 the ZT0 register must be context switched as part of the floating point state. This register is stored immediately after ZA in memory and is only accessible when PSTATE.ZA is set so we handle it in the same functions we use to save and restore ZA. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-10-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>