path: root/arch/arm64/include
Age    Commit message    Author
2023-01-23  Merge tag 'efi-fixes-for-v6.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi  (Linus Torvalds)
Pull EFI fixes from Ard Biesheuvel:
 "Another couple of EFI fixes, of which the first two were already in -next when I sent out the previous PR, but they caused some issues on non-EFI boots so I let them simmer for a bit longer.

  - ensure the EFI ResetSystem and ACPI PRM calls are recognized as users of the EFI runtime, and therefore protected against exceptions

  - account for the EFI runtime stack in the stacktrace code

  - remove Matthew Garrett's MAINTAINERS entry for efivarfs"

* tag 'efi-fixes-for-v6.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi:
  efi: Remove Matthew Garrett as efivarfs maintainer
  arm64: efi: Account for the EFI runtime stack in stack unwinder
  arm64: efi: Avoid workqueue to check whether EFI runtime is live
2023-01-21  KVM: arm64: Normalize cache configuration  (Akihiko Odaki)
Before this change, the cache configuration of the physical CPU was exposed to vcpus. This is problematic because the cache configuration a vcpu sees varies when it migrates between physical CPUs with different cache configurations. Fabricate the cache configuration from the sanitized value, which holds the CTR_EL0 value userspace sees regardless of which physical CPU the vcpu resides on. CLIDR_EL1 and CCSIDR_EL1 are now writable from userspace so that the VMM can restore the values saved with the old kernel. Suggested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Link: https://lore.kernel.org/r/20230112023852.42012-8-akihiko.odaki@daynix.com [ Oliver: Squash Marc's fix for CCSIDR_EL1.LineSize when set from userspace ] Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2023-01-20  arm64: Add compat hwcap SSBS  (Amit Daniel Kachhap)
This hwcap was added for the 32-bit native arm kernel by commit fea53546be57 ("ARM: 9274/1: Add hwcap for Speculative Store Bypassing Safe"), so add the corresponding changes to 32-bit compat arm64 for the same user interface. Speculative Store Bypass Safe is a feature (FEAT_SSBS) present in the AArch32/AArch64 state for Armv8 and can be identified from the PFR2.SSBS identification register field. This hwcap is already advertised in the native arm64 kernel. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230111053706.13994-8-amit.kachhap@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64: Add compat hwcap SB  (Amit Daniel Kachhap)
This hwcap was added for the 32-bit native arm kernel by commit 3bda6d884897 ("ARM: 9273/1: Add hwcap for Speculation Barrier(SB)"), so add the corresponding changes to the 32-bit compat arm64 kernel. Speculation Barrier is a feature (FEAT_SB) present in both the AArch32 and AArch64 states. This hwcap is already advertised in the native arm64 kernel. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230111053706.13994-7-amit.kachhap@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64: Add compat hwcap I8MM  (Amit Daniel Kachhap)
This hwcap was added earlier for the 32-bit native arm kernel by commit 956ca3a4eb81 ("ARM: 9272/1: vfp: Add hwcap for FEAT_AA32I8MM"), so add the corresponding changes to the 32-bit compat arm64 kernel for the same user interface. Int8 matrix multiplication is a feature (FEAT_AA32I8MM) present in the AArch32 state of Armv8 and is identified by the ISAR6.I8MM register field. A similar feature (FEAT_I8MM) exists for the AArch64 state and is already advertised in the arm64 kernel. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230111053706.13994-6-amit.kachhap@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64: Add compat hwcap ASIMDBF16  (Amit Daniel Kachhap)
This hwcap was added earlier for the 32-bit native arm kernel by commit 23b6d4ad6e7a ("ARM: 9271/1: vfp: Add hwcap for FEAT_AA32BF16"), so add the corresponding changes to the 32-bit compat arm64 kernel. The Brain 16-bit floating-point storage format is a feature (FEAT_AA32BF16) present in the AArch32 state for Armv8 and is represented by the ISAR6.BF16 identification register field. A similar feature (FEAT_BF16) exists for the AArch64 state and is already advertised in the native arm64 kernel. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230111053706.13994-5-amit.kachhap@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64: Add compat hwcap ASIMDFHM  (Amit Daniel Kachhap)
This hwcap was added earlier for the 32-bit native arm kernel by commit ce4835497c20 ("ARM: 9270/1: vfp: Add hwcap for FEAT_FHM"), so add the corresponding changes to the 32-bit compat arm64 kernel for the same user interface. Floating-point half-precision multiplication (FHM) is a feature present in the AArch32/AArch64 state for Armv8. This hwcap is already advertised in the native arm64 kernel. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230111053706.13994-4-amit.kachhap@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64: Add compat hwcap ASIMDDP  (Amit Daniel Kachhap)
This hwcap was added earlier for the 32-bit native arm kernel by commit 62ea0d873af3 ("ARM: 9269/1: vfp: Add hwcap for FEAT_DotProd"), so add the corresponding changes to the 32-bit compat arm64 kernel for the same user interface. Advanced Dot product is a feature (FEAT_DotProd) present in both the AArch32 and AArch64 states for Armv8 and is already advertised in the native arm64 kernel. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230111053706.13994-3-amit.kachhap@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64: Add compat hwcap FPHP and ASIMDHP  (Amit Daniel Kachhap)
These hwcaps were added earlier for the 32-bit native arm kernel by commit c00a19c8b143 ("ARM: 9268/1: vfp: Add hwcap FPHP and ASIMDHP for FEAT_FP16"), so add the corresponding changes to the 32-bit compat arm64 kernel for the same userspace interfaces. Floating point half-precision (FPHP) and Advanced SIMD half-precision (ASIMDHP) represent the Armv8 FP16 feature extension, which is already advertised in the native arm64 kernel. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230111053706.13994-2-amit.kachhap@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64: Always load shadow stack pointer directly from the task struct  (Ard Biesheuvel)
All occurrences of the scs_load macro load the value of the shadow call stack pointer from the task which is current at that point. So instead of taking a task struct register argument in the scs_load macro to specify the task struct to load from, let's always reference the current task directly. This should make it much harder to exploit any instruction sequences reloading the shadow call stack pointer register from memory. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20230109174800.3286265-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64: Avoid repeated AA64MMFR1_EL1 register read on pagefault path  (Gabriel Krisman Bertazi)
Accessing AA64MMFR1_EL1 is expensive in KVM guests, since it is emulated in the hypervisor. In fact, ARM documentation mentions some feature registers are not supposed to be accessed frequently by the OS, and therefore should be emulated for guests [1]. Commit 0388f9c74330 ("arm64: mm: Implement arch_wants_old_prefaulted_pte()") introduced a read of this register in the page fault path. But, even when the feature of setting faultaround pages with the old flag is disabled for a given cpu, we are still paying the cost of checking the register on every pagefault. This results in an explosion of vmexit events in KVM guests, which directly impacts the performance of virtualized workloads. For instance, running kernbench yields a 15% increase in system time solely due to the increased vmexit cycles. This patch avoids the extra cost by using the sanitized cached value. It should be safe to do so, since this register mustn't change for a given cpu. [1] https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/Learn%20the%20Architecture/Armv8-A%20virtualization.pdf?revision=a765a7df-1a00-434d-b241-357bfda2dd31 Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de> Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20230109151955.8292-1-krisman@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
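A minimal sketch of the idea, assuming the usual arm64 cpufeature helpers (read_sanitised_ftr_reg(), cpuid_feature_extract_unsigned_field()) and the generated ID_AA64MMFR1_EL1_HAFDBS_SHIFT constant; this is illustrative, not the literal upstream patch:

  static inline bool cpu_has_hw_af(void)
  {
          u64 mmfr1;

          if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
                  return false;

          /* Use the cached, sanitised copy: no MRS, so no vmexit in a guest. */
          mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
          return cpuid_feature_extract_unsigned_field(mmfr1,
                                                      ID_AA64MMFR1_EL1_HAFDBS_SHIFT);
  }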
2023-01-20  arm64/signal: Include TPIDR2 in the signal context  (Mark Brown)
Add a new signal frame record for TPIDR2 using the same format as we already use for ESR with different magic, a header with the value from the register appended as the only data. If SME is supported then this record is always included. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com> Link: https://lore.kernel.org/r/20221208-arm64-tpidr2-sig-v3-2-c77c6c8775f4@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
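A sketch of what such a record could look like, modelled on the existing esr_context layout; the struct name and the way the magic is allocated are assumptions rather than the authoritative uapi:

  struct tpidr2_context {
          struct _aarch64_ctx head;       /* head.magic = TPIDR2_MAGIC, head.size = sizeof(struct tpidr2_context) */
          __u64 tpidr2;                   /* TPIDR2_EL0 value at the time the signal was delivered */
  };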
2023-01-20  arm64/sme: Add hwcaps for SME 2 and 2.1 features  (Mark Brown)
In order to allow userspace to discover the presence of the new SME features add hwcaps for them. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-13-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64/sme: Implement signal handling for ZT  (Mark Brown)
Add a new signal context type for ZT which is present in the signal frame when ZA is enabled and ZT is supported by the system. In order to account for the possible addition of further ZT registers in the future we make the number of registers variable in the ABI, though currently the only possible number is 1. We could just use a bare list head for the context since the number of registers can be inferred from the size of the context but for usability and future extensibility we define a header with the number of registers and some reserved fields in it. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-11-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
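A sketch of the variable-length record described above; field and macro names are assumptions:

  struct zt_context {
          struct _aarch64_ctx head;       /* head.magic = ZT_MAGIC, head.size also covers the register data */
          __u16 nregs;                    /* number of 512-bit ZT registers, currently always 1 */
          __u16 __reserved[3];
          /* nregs * 64 bytes of ZT register data follow this header */
  };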
2023-01-20  arm64/sme: Implement context switching for ZT0  (Mark Brown)
When the system supports SME2 the ZT0 register must be context switched as part of the floating point state. This register is stored immediately after ZA in memory and is only accessible when PSTATE.ZA is set so we handle it in the same functions we use to save and restore ZA. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-10-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64/sme: Provide storage for ZT0  (Mark Brown)
When the system supports SME2 there is an additional register ZT0 which we must store when the task is using SME. Since ZT0 is accessible only when PSTATE.ZA is set just like ZA we allocate storage for it along with ZA, increasing the allocation size for the memory region where we store ZA and storing the data for ZT after that for ZA. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-9-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64/sme: Add basic enumeration for SME2  (Mark Brown)
Add basic feature detection for SME2, detecting that the feature is present and disabling traps for ZT0. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-8-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64/sme: Manually encode ZT0 load and store instructions  (Mark Brown)
In order to avoid unrealistic toolchain requirements we manually encode the instructions for loading and storing ZT0. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-6-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64/esr: Document ISS for ZT0 being disabled  (Mark Brown)
SME2 defines a new ISS code for use when trapping accesses to ZT0; add a definition for it. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-5-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-20  arm64/sme: Rename za_state to sme_state  (Mark Brown)
In preparation for adding support for storage for ZT0 to the thread_struct rename za_state to sme_state. Since ZT0 is accessible when PSTATE.ZA is set just like ZA itself we will extend the allocation done for ZA to cover it, avoiding the need to further expand task_struct for non-SME tasks. No functional changes. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-1-f2fa0aef982f@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-01-19  KVM: MMU: Make the definition of 'INVALID_GPA' common  (Yu Zhang)
KVM already has a 'GPA_INVALID' defined as (~(gpa_t)0) in kvm_types.h, and it is used by ARM code. We do not need another definition of 'INVALID_GPA' for X86 specifically. Instead of using the common 'GPA_INVALID' for X86, replace it with 'INVALID_GPA', and change the users of 'GPA_INVALID' so that the diff can be smaller. The name 'INVALID_GPA' also reads better: it tells the user we are using an invalid GPA, while 'GPA_INVALID' emphasizes that the GPA is an invalid one. No functional change intended. Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20230105130127.866171-1-yu.c.zhang@linux.intel.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
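For reference, a one-line sketch of the resulting common definition, using the value quoted above:

  #define INVALID_GPA     (~(gpa_t)0)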
2023-01-19  perf: arm_spe: Support new SPEv1.2/v8.7 'not taken' event  (Rob Herring)
Arm SPEv1.2 (Armv8.7/v9.2) adds a new event, 'not taken', in bit 6 of the PMSEVFR_EL1 register. Update arm_spe_pmsevfr_res0() to support the additional event. Tested-by: James Clark <james.clark@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20220825-arm-spe-v8-7-v4-6-327f860daf28@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2023-01-19  arm64/sysreg: Convert SPE registers to automatic generation  (Rob Herring)
Convert all the SPE register defines to automatic generation. No functional changes. New registers and fields for SPEv1.2 are added with the conversion. Some of the PMBSR MSS field defines are kept as the automatic generation has no way to create multiple names for the same register bits. The meaning of the MSS field depends on other bits. Tested-by: James Clark <james.clark@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20220825-arm-spe-v8-7-v4-3-327f860daf28@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2023-01-19  arm64: Drop SYS_ from SPE register defines  (Rob Herring)
We currently have a non-standard SYS_ prefix in the constants generated for the SPE register bitfields. Drop this in preparation for automatic register definition generation. The SPE mask defines were unshifted, and the SPE register field enumerations were shifted. The autogenerated defines are the opposite, so make the necessary adjustments. No functional changes. Tested-by: James Clark <james.clark@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20220825-arm-spe-v8-7-v4-2-327f860daf28@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2023-01-19  perf: arm_spe: Use feature numbering for PMSEVFR_EL1 defines  (Rob Herring)
Similar to commit 121a8fc088f1 ("arm64/sysreg: Use feature numbering for PMU and SPE revisions") use feature numbering instead of architecture versions for the PMSEVFR_EL1 Res0 defines. Tested-by: James Clark <james.clark@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20220825-arm-spe-v8-7-v4-1-327f860daf28@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2023-01-19  serial: earlycon-arm-semihost: Move smh_putc() variants in respective arch's semihost.h  (Bin Meng)
Move the smh_putc() variants into the respective arch/*/include/asm/semihost.h headers, in preparation for adding RISC-V support. Signed-off-by: Bin Meng <bmeng@tinylab.org> Tested-by: Sergey Matyukevich <sergey.matyukevich@syntacore.com> Acked-by: Palmer Dabbelt <palmer@rivosinc.com> Link: https://lore.kernel.org/r/20221209150437.795918-2-bmeng@tinylab.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
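A hedged sketch of what an arm64 flavour of such a header might contain, using the SYS_WRITEC (0x03) semihosting call via the A64 HLT #0xF000 trap; the exact upstream code and prototype may differ:

  /* arch/arm64/include/asm/semihost.h (sketch) */
  struct uart_port;

  static inline void smh_putc(struct uart_port *port, unsigned char c)
  {
          asm volatile("mov  x1, %0\n"            /* x1 = pointer to the character */
                       "mov  x0, #3\n"            /* x0 = SYS_WRITEC */
                       "hlt  #0xf000\n"           /* semihosting trap */
                       : : "r" (&c) : "x0", "x1", "memory");
  }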
2023-01-16  arm64: efi: Account for the EFI runtime stack in stack unwinder  (Ard Biesheuvel)
The EFI runtime services run from a dedicated stack now, and so the stack unwinder needs to be informed about this. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-01-16  arm64: efi: Avoid workqueue to check whether EFI runtime is live  (Ard Biesheuvel)
Comparing current_work() against efi_rts_work.work is sufficient to decide whether current is currently running EFI runtime services code at any level in its call stack. However, there are other potential users of the EFI runtime stack, such as the ACPI subsystem, which may invoke efi_call_virt_pointer() directly, and so any sync exceptions occurring in firmware during those calls are currently misidentified. So instead, let's check whether the stashed value of the thread stack pointer points into current's thread stack. This can only be the case if current was interrupted while running EFI runtime code. Note that this implies that we should clear the stashed value after switching back, to avoid false positives. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
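A sketch of the described check, assuming a stashed efi_rt_stack_top pointer and the three-argument on_task_stack() helper; the exact stashing convention is an assumption:

  static inline bool current_in_efi(void)
  {
          if (!IS_ENABLED(CONFIG_EFI) || efi_disabled(EFI_RUNTIME_SERVICES))
                  return false;

          /*
           * The thread SP stashed at the top of the EFI runtime stack points
           * into current's stack only if current was interrupted while
           * running EFI runtime services code.
           */
          return efi_rt_stack_top &&
                 on_task_stack(current, READ_ONCE(efi_rt_stack_top[-1]), 1);
  }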
2023-01-13  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini: "ARM: - Fix the PMCR_EL0 reset value after the PMU rework - Correctly handle S2 fault triggered by a S1 page table walk by not always classifying it as a write, as this breaks on R/O memslots - Document why we cannot exit with KVM_EXIT_MMIO when taking a write fault from a S1 PTW on a R/O memslot - Put the Apple M2 on the naughty list for not being able to correctly implement the vgic SEIS feature, just like the M1 before it - Reviewer updates: Alex is stepping down, replaced by Zenghui x86: - Fix various rare locking issues in Xen emulation and teach lockdep to detect them - Documentation improvements - Do not return host topology information from KVM_GET_SUPPORTED_CPUID" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: x86/xen: Avoid deadlock by adding kvm->arch.xen.xen_lock leaf node lock KVM: Ensure lockdep knows about kvm->lock vs. vcpu->mutex ordering rule KVM: x86/xen: Fix potential deadlock in kvm_xen_update_runstate_guest() KVM: x86/xen: Fix lockdep warning on "recursive" gpc locking Documentation: kvm: fix SRCU locking order docs KVM: x86: Do not return host topology information from KVM_GET_SUPPORTED_CPUID KVM: nSVM: clarify recalc_intercepts() wrt CR8 MAINTAINERS: Remove myself as a KVM/arm64 reviewer MAINTAINERS: Add Zenghui Yu as a KVM/arm64 reviewer KVM: arm64: vgic: Add Apple M2 cpus to the list of broken SEIS implementations KVM: arm64: Convert FSC_* over to ESR_ELx_FSC_* KVM: arm64: Document the behaviour of S1PTW faults on RO memslots KVM: arm64: Fix S1PTW handling on RO memslots KVM: arm64: PMU: Fix PMCR_EL0 reset value
2023-01-12  KVM: arm64: Kill CPACR_EL1_TTA definition  (Marc Zyngier)
Since the One True Way is to use the new generated definition, kill the KVM-specific definition of CPACR_EL1_TTA, and move over to CPACR_ELx_TTA, hopefully for the same result. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230112154803.1808559-1-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2023-01-12  KVM: arm64: Ignore EAGAIN for walks outside of a fault  (Oliver Upton)
The page table walkers are invoked outside fault handling paths, such as write protecting a range of memory. EAGAIN is generally used by the walkers to retry execution due to races on a particular PTE, like taking an access fault on a PTE being invalidated from another thread. This early return behavior is undesirable for walkers that operate outside a fault handler. Suppress EAGAIN and continue the walk if operating outside a fault handler. Link: https://lore.kernel.org/r/20221202185156.696189-3-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2023-01-12  KVM: arm64: Use KVM's pte type/helpers in handle_access_fault()  (Oliver Upton)
Consistently use KVM's own pte types and helpers in handle_access_fault(). No functional change intended. Link: https://lore.kernel.org/r/20221202185156.696189-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2023-01-12  KVM: arm64: Always set HCR_TID2  (Akihiko Odaki)
Always set HCR_TID2 to trap CTR_EL0, CCSIDR2_EL1, CLIDR_EL1, and CSSELR_EL1. This saves a few lines of code and allows us to employ their access trap handlers for more purposes, as anticipated by the old condition for setting HCR_TID2. Suggested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Reviewed-by: Reiji Watanabe <reijiw@google.com> Link: https://lore.kernel.org/r/20230112023852.42012-6-akihiko.odaki@daynix.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2023-01-12  arm64/cache: Move CLIDR macro definitions  (Akihiko Odaki)
The macros are useful for KVM, which needs to manage how CLIDR is exposed to vcpus, so move them to include/asm/cache.h where KVM can refer to them. Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Link: https://lore.kernel.org/r/20230112023852.42012-5-akihiko.odaki@daynix.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2023-01-12  arm64/sysreg: Convert CCSIDR_EL1 to automatic generation  (Akihiko Odaki)
Convert CCSIDR_EL1 to automatic generation as per DDI0487I.a. Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230112023852.42012-3-akihiko.odaki@daynix.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2023-01-12  arm64/cpufeature: Remove 4 bit assumption in ARM64_FEATURE_MASK()  (Mark Brown)
The ARM64_FEATURE_MASK() macro, used extensively by KVM, assumes that all ID register fields are 4 bits wide, but this is no longer the case: for example, there are several 1-bit fields in ID_AA64SMFR0_EL1. Fortunately we now have generated constants for all the ID register masks which can be used instead. Rather than create churn by updating existing users, update the macro to reference the generated constants and replace the comment with a note advising against adding new users. There are also users of ARM64_FEATURE_FIELD_BITS in the pKVM code which will need to be fixed separately; since no relevant feature is planned to be exposed to protected guests in the immediate future, there is no immediate issue with them assuming fields are 4 bits wide. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20221222-arm64-arm64-feature-mask-v1-1-c34c1e177f90@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
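A sketch of the updated macro: defer to the generated per-field _MASK constants instead of assuming a 4-bit field width (this only works for fields that have generated definitions, hence the advice against new users):

  #define ARM64_FEATURE_MASK(x)   (x##_MASK)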
2023-01-09  arm64/mm: Define dummy pud_user_exec() when using 2-level page-table  (Will Deacon)
With only two levels of page-table, the generic 'pud_*' macros are implemented using dummy operations in pgtable-nopmd.h. Since commit 730a11f982e6 ("arm64/mm: add pud_user_exec() check in pud_user_accessible_page()"), pud_user_accessible_page() unconditionally calls pud_user_exec(), which is an arm64-specific helper and therefore isn't defined by pgtable-nopmd.h. This results in a build failure for configurations with only two levels of page table: arch/arm64/include/asm/pgtable.h: In function 'pud_user_accessible_page': >> arch/arm64/include/asm/pgtable.h:870:51: error: implicit declaration of function 'pud_user_exec'; did you mean 'pmd_user_exec'? [-Werror=implicit-function-declaration] 870 | return pud_leaf(pud) && (pud_user(pud) || pud_user_exec(pud)); | ^~~~~~~~~~~~~ | pmd_user_exec Fix the problem by defining pud_user_exec() as pud_user() in this case. Link: https://lore.kernel.org/r/202301080515.z6zEksU4-lkp@intel.com Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Will Deacon <will@kernel.org>
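A sketch of the fix for the folded (2-level) configuration, where pud_user() is already a dummy provided by pgtable-nopmd.h:

  #if CONFIG_PGTABLE_LEVELS > 2
  /* real pud_user_exec() definition, as before */
  #else
  #define pud_user_exec(pud)      pud_user(pud)   /* dummy, matches the dummy pud_user() */
  #endif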
2023-01-06  arm64: errata: Workaround possible Cortex-A715 [ESR|FAR]_ELx corruption  (Anshuman Khandual)
If a Cortex-A715 cpu sees a page mapping permissions change from executable to non-executable, it may corrupt the ESR_ELx and FAR_ELx registers on the next instruction abort caused by a permission fault. Only user-space does the executable to non-executable permission transition, via the mprotect() system call, which calls the ptep_modify_prot_start() and ptep_modify_prot_commit() helpers while changing the page mapping. The platform code can override these helpers via __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION. Work around the problem by doing a break-before-make TLB invalidation for all executable user space mappings that go through the mprotect() system call. This overrides ptep_modify_prot_start() and ptep_modify_prot_commit(), via defining __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION on the platform, thus giving an opportunity to intercept user space exec mappings and do the necessary TLB invalidation. Similar interceptions are also implemented for HugeTLB. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-doc@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20230102061651.34745-1-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
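A sketch of the override pair described above; the erratum-capability gating and the HugeTLB variants are omitted, so treat this as illustrative rather than the exact upstream code:

  pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
                               pte_t *ptep)
  {
          /*
           * Break-before-make: if the old mapping was user-executable, clear
           * the entry and invalidate the TLB before the new permissions are
           * written back in ptep_modify_prot_commit().
           */
          if (pte_user_exec(READ_ONCE(*ptep)))
                  return ptep_clear_flush(vma, addr, ptep);

          return ptep_get_and_clear(vma->vm_mm, addr, ptep);
  }

  void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
                               pte_t *ptep, pte_t old_pte, pte_t pte)
  {
          set_pte_at(vma->vm_mm, addr, ptep, pte);
  }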
2023-01-05  arm64: cmpxchg_double*: hazard against entire exchange variable  (Mark Rutland)
The inline assembly for arm64's cmpxchg_double*() implementations use a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a pointer to unsigned long, and thus the hazard only applies to the first 8 bytes of the location. GCC can take advantage of this, assuming that other portions of the location are unchanged, leading to a number of potential problems. This is similar to what we fixed back in commit: fee960bed5e857eb ("arm64: xchg: hazard against entire exchange variable") ... but we forgot to adjust cmpxchg_double*() similarly at the same time. The same problem applies, as demonstrated with the following test: | struct big { | u64 lo, hi; | } __aligned(128); | | unsigned long foo(struct big *b) | { | u64 hi_old, hi_new; | | hi_old = b->hi; | cmpxchg_double_local(&b->lo, &b->hi, 0x12, 0x34, 0x56, 0x78); | hi_new = b->hi; | | return hi_old ^ hi_new; | } ... which GCC 12.1.0 compiles as: | 0000000000000000 <foo>: | 0: d503233f paciasp | 4: aa0003e4 mov x4, x0 | 8: 1400000e b 40 <foo+0x40> | c: d2800240 mov x0, #0x12 // #18 | 10: d2800681 mov x1, #0x34 // #52 | 14: aa0003e5 mov x5, x0 | 18: aa0103e6 mov x6, x1 | 1c: d2800ac2 mov x2, #0x56 // #86 | 20: d2800f03 mov x3, #0x78 // #120 | 24: 48207c82 casp x0, x1, x2, x3, [x4] | 28: ca050000 eor x0, x0, x5 | 2c: ca060021 eor x1, x1, x6 | 30: aa010000 orr x0, x0, x1 | 34: d2800000 mov x0, #0x0 // #0 <--- BANG | 38: d50323bf autiasp | 3c: d65f03c0 ret | 40: d2800240 mov x0, #0x12 // #18 | 44: d2800681 mov x1, #0x34 // #52 | 48: d2800ac2 mov x2, #0x56 // #86 | 4c: d2800f03 mov x3, #0x78 // #120 | 50: f9800091 prfm pstl1strm, [x4] | 54: c87f1885 ldxp x5, x6, [x4] | 58: ca0000a5 eor x5, x5, x0 | 5c: ca0100c6 eor x6, x6, x1 | 60: aa0600a6 orr x6, x5, x6 | 64: b5000066 cbnz x6, 70 <foo+0x70> | 68: c8250c82 stxp w5, x2, x3, [x4] | 6c: 35ffff45 cbnz w5, 54 <foo+0x54> | 70: d2800000 mov x0, #0x0 // #0 <--- BANG | 74: d50323bf autiasp | 78: d65f03c0 ret Notice that at the lines with "BANG" comments, GCC has assumed that the higher 8 bytes are unchanged by the cmpxchg_double() call, and that `hi_old ^ hi_new` can be reduced to a constant zero, for both LSE and LL/SC versions of cmpxchg_double(). This patch fixes the issue by passing a pointer to __uint128_t into the +Q constraint, ensuring that the compiler hazards against the entire 16 bytes being modified. 
With this change, GCC 12.1.0 compiles the above test as: | 0000000000000000 <foo>: | 0: f9400407 ldr x7, [x0, #8] | 4: d503233f paciasp | 8: aa0003e4 mov x4, x0 | c: 1400000f b 48 <foo+0x48> | 10: d2800240 mov x0, #0x12 // #18 | 14: d2800681 mov x1, #0x34 // #52 | 18: aa0003e5 mov x5, x0 | 1c: aa0103e6 mov x6, x1 | 20: d2800ac2 mov x2, #0x56 // #86 | 24: d2800f03 mov x3, #0x78 // #120 | 28: 48207c82 casp x0, x1, x2, x3, [x4] | 2c: ca050000 eor x0, x0, x5 | 30: ca060021 eor x1, x1, x6 | 34: aa010000 orr x0, x0, x1 | 38: f9400480 ldr x0, [x4, #8] | 3c: d50323bf autiasp | 40: ca0000e0 eor x0, x7, x0 | 44: d65f03c0 ret | 48: d2800240 mov x0, #0x12 // #18 | 4c: d2800681 mov x1, #0x34 // #52 | 50: d2800ac2 mov x2, #0x56 // #86 | 54: d2800f03 mov x3, #0x78 // #120 | 58: f9800091 prfm pstl1strm, [x4] | 5c: c87f1885 ldxp x5, x6, [x4] | 60: ca0000a5 eor x5, x5, x0 | 64: ca0100c6 eor x6, x6, x1 | 68: aa0600a6 orr x6, x5, x6 | 6c: b5000066 cbnz x6, 78 <foo+0x78> | 70: c8250c82 stxp w5, x2, x3, [x4] | 74: 35ffff45 cbnz w5, 5c <foo+0x5c> | 78: f9400480 ldr x0, [x4, #8] | 7c: d50323bf autiasp | 80: ca0000e0 eor x0, x7, x0 | 84: d65f03c0 ret ... sampling the high 8 bytes before and after the cmpxchg, and performing an EOR, as we'd expect. For backporting, I've tested this atop linux-4.9.y with GCC 5.5.0. Note that linux-4.9.y is oldest currently supported stable release, and mandates GCC 5.1+. Unfortunately I couldn't get a GCC 5.1 binary to run on my machines due to library incompatibilities. I've also used a standalone test to check that we can use a __uint128_t pointer in a +Q constraint at least as far back as GCC 4.8.5 and LLVM 3.9.1. Fixes: 5284e1b4bc8a ("arm64: xchg: Implement cmpxchg_double") Fixes: e9a4b795652f ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU") Reported-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/lkml/Y6DEfQXymYVgL3oJ@boqun-archlinux/ Reported-by: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/lkml/Y6GXoO4qmH9OIZ5Q@hirez.programming.kicks-ass.net/ Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: stable@vger.kernel.org Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Steve Capper <steve.capper@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230104151626.3262137-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2023-01-05  arm64/uprobes: change the uprobe_opcode_t typedef to fix the sparse warning  (junhua huang)
After we fixed the uprobe instruction endianness for big-endian arm64, the sparse check reports the following warnings: sparse warnings: (new ones prefixed by >>) >> kernel/events/uprobes.c:223:25: sparse: sparse: restricted __le32 degrades to integer >> kernel/events/uprobes.c:574:56: sparse: sparse: incorrect type in argument 4 (different base types) @@ expected unsigned int [addressable] [usertype] opcode @@ got restricted __le32 [usertype] @@ kernel/events/uprobes.c:574:56: sparse: expected unsigned int [addressable] [usertype] opcode kernel/events/uprobes.c:574:56: sparse: got restricted __le32 [usertype] >> kernel/events/uprobes.c:1483:32: sparse: sparse: incorrect type in initializer (different base types) @@ expected unsigned int [usertype] insn @@ got restricted __le32 [usertype] @@ kernel/events/uprobes.c:1483:32: sparse: expected unsigned int [usertype] insn kernel/events/uprobes.c:1483:32: sparse: got restricted __le32 [usertype] Use __le32 instead of u32 for uprobe_opcode_t so the types stay consistent. Fixes: 60f07e22a73d ("arm64:uprobe fix the uprobe SWBP_INSN in big-endian") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: junhua huang <huang.junhua@zte.com.cn> Link: https://lore.kernel.org/r/202212280954121197626@zte.com.cn Signed-off-by: Will Deacon <will@kernel.org>
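A sketch of the resulting typedef in arch/arm64/include/asm/uprobes.h, matching the little-endian instruction representation used since the big-endian SWBP fix:

  typedef __le32 uprobe_opcode_t;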
2023-01-05  Merge branch kvm-arm64/s1ptw-write-fault into kvmarm-master/fixes  (Marc Zyngier)
* kvm-arm64/s1ptw-write-fault: : . : Fix S1PTW fault handling that was until then always taken : as a write. From the cover letter: : : `Recent developments on the EFI front have resulted in guests that : simply won't boot if the page tables are in a read-only memslot and : that you're a bit unlucky in the way S2 gets paged in... The core : issue is related to the fact that we treat a S1PTW as a write, which : is close enough to what needs to be done. Until to get to RO memslots. : : The first patch fixes this and is definitely a stable candidate. It : splits the faulting of page tables in two steps (RO translation fault, : followed by a writable permission fault -- should it even happen). : The second one documents the slightly odd behaviour of PTW writes to : RO memslot, which do not result in a KVM_MMIO exit. The last patch is : totally optional, only tangentially related, and randomly repainting : stuff (maybe that's contagious, who knows)." : : . KVM: arm64: Convert FSC_* over to ESR_ELx_FSC_* KVM: arm64: Document the behaviour of S1PTW faults on RO memslots KVM: arm64: Fix S1PTW handling on RO memslots Signed-off-by: Marc Zyngier <maz@kernel.org>
2023-01-05  KVM: arm64: vgic: Add Apple M2 cpus to the list of broken SEIS implementations  (Marc Zyngier)
I really hoped that Apple had fixed their not-quite-a-vgic implementation when moving from M1 to M2. Alas, it seems they didn't, and running a buggy EFI version results in the vgic generating SErrors outside of the guest and taking the host down. Apply the same workaround as for M1. Yes, this is all a bit crap. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230103095022.3230946-2-maz@kernel.org
2023-01-05  arm64/mm: add pud_user_exec() check in pud_user_accessible_page()  (Liu Shixin)
Add check for the executable case in pud_user_accessible_page() too like what we did for pte and pmd. Fixes: 42b2547137f5 ("arm64/mm: enable ARCH_SUPPORTS_PAGE_TABLE_CHECK") Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Liu Shixin <liushixin2@huawei.com> Link: https://lore.kernel.org/r/20221122123137.429686-1-liushixin2@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
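A sketch of the extended helper, mirroring the existing pte/pmd variants:

  static inline bool pud_user_accessible_page(pud_t pud)
  {
          return pud_leaf(pud) && (pud_user(pud) || pud_user_exec(pud));
  }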
2023-01-05  arm64/mm: fix incorrect file_map_count for invalid pmd  (Liu Shixin)
The page table check trigger BUG_ON() unexpectedly when split hugepage: ------------[ cut here ]------------ kernel BUG at mm/page_table_check.c:119! Internal error: Oops - BUG: 00000000f2000800 [#1] SMP Dumping ftrace buffer: (ftrace buffer empty) Modules linked in: CPU: 7 PID: 210 Comm: transhuge-stres Not tainted 6.1.0-rc3+ #748 Hardware name: linux,dummy-virt (DT) pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : page_table_check_set.isra.0+0x398/0x468 lr : page_table_check_set.isra.0+0x1c0/0x468 [...] Call trace: page_table_check_set.isra.0+0x398/0x468 __page_table_check_pte_set+0x160/0x1c0 __split_huge_pmd_locked+0x900/0x1648 __split_huge_pmd+0x28c/0x3b8 unmap_page_range+0x428/0x858 unmap_single_vma+0xf4/0x1c8 zap_page_range+0x2b0/0x410 madvise_vma_behavior+0xc44/0xe78 do_madvise+0x280/0x698 __arm64_sys_madvise+0x90/0xe8 invoke_syscall.constprop.0+0xdc/0x1d8 do_el0_svc+0xf4/0x3f8 el0_svc+0x58/0x120 el0t_64_sync_handler+0xb8/0xc0 el0t_64_sync+0x19c/0x1a0 [...] On arm64, pmd_leaf() will return true even if the pmd is invalid due to pmd_present_invalid() check. So in pmdp_invalidate() the file_map_count will not only decrease once but also increase once. Then in set_pte_at(), the file_map_count increase again, and so trigger BUG_ON() unexpectedly. Add !pmd_present_invalid() check in pmd_user_accessible_page() to fix the problem. Fixes: 42b2547137f5 ("arm64/mm: enable ARCH_SUPPORTS_PAGE_TABLE_CHECK") Reported-by: Denys Vlasenko <dvlasenk@redhat.com> Signed-off-by: Liu Shixin <liushixin2@huawei.com> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20221121073608.4183459-1-liushixin2@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
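A sketch of the fixed helper: skip present-invalid pmds so the split path (pmdp_invalidate() followed by set_pte_at()) is not double-counted:

  static inline bool pmd_user_accessible_page(pmd_t pmd)
  {
          return pmd_leaf(pmd) && !pmd_present_invalid(pmd) &&
                 (pmd_user(pmd) || pmd_user_exec(pmd));
  }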
2023-01-03  KVM: arm64: Convert FSC_* over to ESR_ELx_FSC_*  (Marc Zyngier)
The former is an AArch32 legacy, so let's move over to the verbose (and strictly identical) version. This involves moving some of the #defines that were private to KVM into the more generic esr.h. Signed-off-by: Marc Zyngier <maz@kernel.org>
2023-01-03  KVM: arm64: Fix S1PTW handling on RO memslots  (Marc Zyngier)
A recent development on the EFI front has resulted in guests having their page tables baked in the firmware binary, and mapped into the IPA space as part of a read-only memslot. Not only is this legitimate, but it also results in added security, so thumbs up. It is possible to take an S1PTW translation fault if the S1 PTs are unmapped at stage-2. However, KVM unconditionally treats S1PTW as a write to correctly handle hardware AF/DB updates to the S1 PTs. Furthermore, KVM injects an exception into the guest for S1PTW writes. In the aforementioned case this results in the guest taking an abort it won't recover from, as the S1 PTs mapping the vectors suffer from the same problem. So clearly our handling is... wrong. Instead, switch to a two-pronged approach: - On S1PTW translation fault, handle the fault as a read - On S1PTW permission fault, handle the fault as a write This is of no consequence to SW that *writes* to its PTs (the write will trigger a non-S1PTW fault), and SW that uses RO PTs will not use HW-assisted AF/DB anyway, as that'd be wrong. Only in the case described in c4ad98e4b72c ("KVM: arm64: Assume write fault on S1PTW permission fault on instruction fetch") do we end-up with two back-to-back faults (page being evicted and faulted back). I don't think this is a case worth optimising for. Fixes: c4ad98e4b72c ("KVM: arm64: Assume write fault on S1PTW permission fault on instruction fetch") Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Regression-tested-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org
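A sketch of the two-pronged classification, using kvm_emulate.h-style helpers named in the surrounding commits; treat the exact shape as an assumption:

  static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
  {
          if (kvm_vcpu_abt_iss1tw(vcpu)) {
                  /*
                   * Only a permission fault on a S1PTW counts as a write
                   * (HW AF/DB update); a translation fault is treated as
                   * a read so RO memslots keep working.
                   */
                  switch (kvm_vcpu_trap_get_fault_type(vcpu)) {
                  case ESR_ELx_FSC_PERM:
                          return true;
                  default:
                          return false;
                  }
          }

          if (kvm_vcpu_trap_is_iabt(vcpu))
                  return false;

          return kvm_vcpu_dabt_iswrite(vcpu);
  }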
2022-12-29  KVM: x86: Unify pr_fmt to use module name for all KVM modules  (Sean Christopherson)
Define pr_fmt using KBUILD_MODNAME for all KVM x86 code so that printks use consistent formatting across common x86, Intel, and AMD code. In addition to providing consistent print formatting, using KBUILD_MODNAME, e.g. kvm_amd and kvm_intel, allows referencing SVM and VMX (and SEV and SGX and ...) as technologies without generating weird messages, and without causing naming conflicts with other kernel code, e.g. "SEV: ", "tdx: ", "sgx: " etc.. are all used by the kernel for non-KVM subsystems. Opportunistically move away from printk() for prints that need to be modified anyways, e.g. to drop a manual "kvm: " prefix. Opportunistically convert a few SGX WARNs that are similarly modified to WARN_ONCE; in the very unlikely event that the WARNs fire, odds are good that they would fire repeatedly and spam the kernel log without providing unique information in each print. Note, defining pr_fmt yields undesirable results for code that uses KVM's printk wrappers, e.g. vcpu_unimpl(). But, that's a pre-existing problem as SVM/kvm_amd already defines a pr_fmt, and thankfully use of KVM's wrappers is relatively limited in KVM x86 code. Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Paul Durrant <paul@xen.org> Message-Id: <20221130230934.1014142-35-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
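The usual idiom, as a sketch; it has to appear before any include that pulls in printk users so the prefix takes effect:

  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt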
2022-12-29  KVM: Drop arch hardware (un)setup hooks  (Sean Christopherson)
Drop kvm_arch_hardware_setup() and kvm_arch_hardware_unsetup() now that all implementations are nops. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Eric Farman <farman@linux.ibm.com> # s390 Acked-by: Anup Patel <anup@brainfault.org> Message-Id: <20221130230934.1014142-10-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-12-16  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 fixes from Will Deacon:

 - Fix Kconfig dependencies to re-allow the enabling of function graph tracer and shadow call stacks at the same time.

 - Revert the workaround for CPU erratum #2645198 since the CONFIG_ guards were incorrect and the code has therefore not seen any real exposure in -next.

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  Revert "arm64: errata: Workaround possible Cortex-A715 [ESR|FAR]_ELx corruption"
  ftrace: Allow WITH_ARGS flavour of graph tracer with shadow call stack
2022-12-15  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm updates from Paolo Bonzini: "ARM64: - Enable the per-vcpu dirty-ring tracking mechanism, together with an option to keep the good old dirty log around for pages that are dirtied by something other than a vcpu. - Switch to the relaxed parallel fault handling, using RCU to delay page table reclaim and giving better performance under load. - Relax the MTE ABI, allowing a VMM to use the MAP_SHARED mapping option, which multi-process VMMs such as crosvm rely on (see merge commit 382b5b87a97d: "Fix a number of issues with MTE, such as races on the tags being initialised vs the PG_mte_tagged flag as well as the lack of support for VM_SHARED when KVM is involved. Patches from Catalin Marinas and Peter Collingbourne"). - Merge the pKVM shadow vcpu state tracking that allows the hypervisor to have its own view of a vcpu, keeping that state private. - Add support for the PMUv3p5 architecture revision, bringing support for 64bit counters on systems that support it, and fix the no-quite-compliant CHAIN-ed counter support for the machines that actually exist out there. - Fix a handful of minor issues around 52bit VA/PA support (64kB pages only) as a prefix of the oncoming support for 4kB and 16kB pages. - Pick a small set of documentation and spelling fixes, because no good merge window would be complete without those. s390: - Second batch of the lazy destroy patches - First batch of KVM changes for kernel virtual != physical address support - Removal of a unused function x86: - Allow compiling out SMM support - Cleanup and documentation of SMM state save area format - Preserve interrupt shadow in SMM state save area - Respond to generic signals during slow page faults - Fixes and optimizations for the non-executable huge page errata fix. - Reprogram all performance counters on PMU filter change - Cleanups to Hyper-V emulation and tests - Process Hyper-V TLB flushes from a nested guest (i.e. from a L2 guest running on top of a L1 Hyper-V hypervisor) - Advertise several new Intel features - x86 Xen-for-KVM: - Allow the Xen runstate information to cross a page boundary - Allow XEN_RUNSTATE_UPDATE flag behaviour to be configured - Add support for 32-bit guests in SCHEDOP_poll - Notable x86 fixes and cleanups: - One-off fixes for various emulation flows (SGX, VMXON, NRIPS=0). - Reinstate IBPB on emulated VM-Exit that was incorrectly dropped a few years back when eliminating unnecessary barriers when switching between vmcs01 and vmcs02. - Clean up vmread_error_trampoline() to make it more obvious that params must be passed on the stack, even for x86-64. - Let userspace set all supported bits in MSR_IA32_FEAT_CTL irrespective of the current guest CPUID. - Fudge around a race with TSC refinement that results in KVM incorrectly thinking a guest needs TSC scaling when running on a CPU with a constant TSC, but no hardware-enumerated TSC frequency. - Advertise (on AMD) that the SMM_CTL MSR is not supported - Remove unnecessary exports Generic: - Support for responding to signals during page faults; introduces new FOLL_INTERRUPTIBLE flag that was reviewed by mm folks Selftests: - Fix an inverted check in the access tracking perf test, and restore support for asserting that there aren't too many idle pages when running on bare metal. - Fix build errors that occur in certain setups (unsure exactly what is unique about the problematic setup) due to glibc overriding static_assert() to a variant that requires a custom message. 
- Introduce actual atomics for clear/set_bit() in selftests - Add support for pinning vCPUs in dirty_log_perf_test. - Rename the so called "perf_util" framework to "memstress". - Add a lightweight psuedo RNG for guest use, and use it to randomize the access pattern and write vs. read percentage in the memstress tests. - Add a common ucall implementation; code dedup and pre-work for running SEV (and beyond) guests in selftests. - Provide a common constructor and arch hook, which will eventually be used by x86 to automatically select the right hypercall (AMD vs. Intel). - A bunch of added/enabled/fixed selftests for ARM64, covering memslots, breakpoints, stage-2 faults and access tracking. - x86-specific selftest changes: - Clean up x86's page table management. - Clean up and enhance the "smaller maxphyaddr" test, and add a related test to cover generic emulation failure. - Clean up the nEPT support checks. - Add X86_PROPERTY_* framework to retrieve multi-bit CPUID values. - Fix an ordering issue in the AMX test introduced by recent conversions to use kvm_cpu_has(), and harden the code to guard against similar bugs in the future. Anything that tiggers caching of KVM's supported CPUID, kvm_cpu_has() in this case, effectively hides opt-in XSAVE features if the caching occurs before the test opts in via prctl(). Documentation: - Remove deleted ioctls from documentation - Clean up the docs for the x86 MSR filter. - Various fixes" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (361 commits) KVM: x86: Add proper ReST tables for userspace MSR exits/flags KVM: selftests: Allocate ucall pool from MEM_REGION_DATA KVM: arm64: selftests: Align VA space allocator with TTBR0 KVM: arm64: Fix benign bug with incorrect use of VA_BITS KVM: arm64: PMU: Fix period computation for 64bit counters with 32bit overflow KVM: x86: Advertise that the SMM_CTL MSR is not supported KVM: x86: remove unnecessary exports KVM: selftests: Fix spelling mistake "probabalistic" -> "probabilistic" tools: KVM: selftests: Convert clear/set_bit() to actual atomics tools: Drop "atomic_" prefix from atomic test_and_set_bit() tools: Drop conflicting non-atomic test_and_{clear,set}_bit() helpers KVM: selftests: Use non-atomic clear/set bit helpers in KVM tests perf tools: Use dedicated non-atomic clear/set bit helpers tools: Take @bit as an "unsigned long" in {clear,set}_bit() helpers KVM: arm64: selftests: Enable single-step without a "full" ucall() KVM: x86: fix APICv/x2AVIC disabled when vm reboot by itself KVM: Remove stale comment about KVM_REQ_UNHALT KVM: Add missing arch for KVM_CREATE_DEVICE and KVM_{SET,GET}_DEVICE_ATTR KVM: Reference to kvm_userspace_memory_region in doc and comments KVM: Delete all references to removed KVM_SET_MEMORY_ALIAS ioctl ...