path: root/arch/x86/kernel/cpu/mtrr
2024-08-22  x86/cpu: KVM: Add common defines for architectural memory types (PAT, MTRRs, etc.)  [Sean Christopherson]
Add defines for the architectural memory types that can be shoved into various MSRs and registers, e.g. MTRRs, PAT, VMX capabilities MSRs, EPTPs, etc. While most MSRs/registers support only a subset of all memory types, the values themselves are architectural and identical across all users. Leave the goofy MTRR_TYPE_* definitions as-is since they are in a uapi header, but add compile-time assertions to connect the dots (and sanity check that the msr-index.h values didn't get fat-fingered). Keep the VMX_EPTP_MT_* defines so that it's slightly more obvious that the EPTP holds a single memory type in 3 of its 64 bits; those bits just happen to be 2:0, i.e. don't need to be shifted. Opportunistically use X86_MEMTYPE_WB instead of an open-coded '6' in setup_vmcs_config(). No functional change intended.
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Link: https://lore.kernel.org/r/20240605231918.2915961-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
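As an illustration of the idea (a sketch, not the actual patch; the X86_MEMTYPE_* names follow the commit text, and the values are the architectural encodings shared by PAT, MTRRs and EPTPs):

  /* Architectural memory-type encodings, identical across all users. */
  #define X86_MEMTYPE_UC        0ULL    /* Uncacheable */
  #define X86_MEMTYPE_WC        1ULL    /* Write Combining */
  #define X86_MEMTYPE_WT        4ULL    /* Write Through */
  #define X86_MEMTYPE_WP        5ULL    /* Write Protected */
  #define X86_MEMTYPE_WB        6ULL    /* Write Back */
  #define X86_MEMTYPE_UC_MINUS  7ULL    /* UC, overridable to WC via PAT */

  /* Compile-time assertions connecting the uapi MTRR_TYPE_* dots. */
  static_assert(MTRR_TYPE_UNCACHABLE == X86_MEMTYPE_UC);
  static_assert(MTRR_TYPE_WRCOMB     == X86_MEMTYPE_WC);
  static_assert(MTRR_TYPE_WRTHROUGH  == X86_MEMTYPE_WT);
  static_assert(MTRR_TYPE_WRPROT     == X86_MEMTYPE_WP);
  static_assert(MTRR_TYPE_WRBACK     == X86_MEMTYPE_WB);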
2024-08-08  x86/mtrr: Check if fixed MTRRs exist before saving them  [Andi Kleen]
MTRRs have an obsolete fixed variant for fine-grained caching control of the 640K-1MB region that uses separate MSRs. This fixed variant has a separate capability bit in the MTRR capability MSR. So far all x86 CPUs which support MTRR have this separate bit set, so it went unnoticed that mtrr_save_state() does not check the capability bit before accessing the fixed MTRR MSRs. On a CPU that does not support the fixed MTRR capability this results in a #GP. The #GP itself is harmless because the RDMSR fault is handled gracefully, but it results in a WARN_ON(). Add the missing capability check to prevent this.
Fixes: 2b1f6278d77c ("[PATCH] x86: Save the MTRRs of the BSP before booting an AP")
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20240808000244.946864-1-ak@linux.intel.com
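A minimal sketch of the missing check (assuming the kernel's MTRR_CAP_FIX define for MTRRcap bit 8; mtrr_save_fixed_ranges() stands in for the existing save path):

  static void mtrr_save_state_sketch(void)
  {
      u64 mtrr_cap;

      /* MTRRcap bit 8 ("FIX") advertises fixed-range MTRR support. */
      rdmsrl(MSR_MTRRcap, mtrr_cap);
      if (!(mtrr_cap & MTRR_CAP_FIX))
          return;    /* no fixed MTRRs: reading their MSRs would #GP */

      mtrr_save_fixed_ranges(NULL);
  }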
2024-04-04x86/CPU/AMD: Track SNP host status with cc_platform_*()Borislav Petkov (AMD)
The host SNP worthiness can determined later, after alternatives have been patched, in snp_rmptable_init() depending on cmdline options like iommu=pt which is incompatible with SNP, for example. Which means that one cannot use X86_FEATURE_SEV_SNP and will need to have a special flag for that control. Use that newly added CC_ATTR_HOST_SEV_SNP in the appropriate places. Move kdump_sev_callback() to its rightful place, while at it. Fixes: 216d106c7ff7 ("x86/sev: Add SEV-SNP host initialization support") Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Srikanth Aithal <sraithal@amd.com> Link: https://lore.kernel.org/r/20240327154317.29909-6-bp@alien8.de
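The shape of the resulting check, as a sketch (CC_ATTR_HOST_SEV_SNP is the attribute named in the commit):

  /* Before: CPU feature test, but alternatives are patched too early. */
  if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
      return;

  /* After: runtime attribute that can still be cleared later,
   * e.g. when iommu=pt turns out to be incompatible with SNP. */
  if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))
      return;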
2024-01-29  x86/mtrr: Don't print errors if MtrrFixDramModEn is set when SNP enabled  [Ashish Kalra]
SNP-enabled platforms require the MtrrFixDramModEn bit to be set across all CPUs when SNP is enabled. Therefore, don't print error messages when MtrrFixDramModEn is set when bringing CPUs online.
Closes: https://lore.kernel.org/kvm/68b2d6bf-bce7-47f9-bebb-2652cc923ff9@linux.microsoft.com/
Reported-by: Jeremi Piotrowski <jpiotrowski@linux.microsoft.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20240126041126.1927228-6-michael.roth@amd.com
2023-11-20  x86/mtrr: Document missing function parameters in kernel-doc  [Borislav Petkov (AMD)]
Add text explaining what they do. No functional changes.
Closes: https://lore.kernel.org/oe-kbuild-all/202311130104.9xKAKzke-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/202311130104.9xKAKzke-lkp@intel.com
2023-06-01  x86/mtrr: Unify debugging printing  [Borislav Petkov (AMD)]
Put all the debugging output behind "mtrr=debug" and get rid of "mtrr_cleanup_debug", which wasn't even documented anywhere. No functional changes.
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20230531174857.GDZHeIib57h5lT5Vh1@fat_crate.local
2023-06-01  x86/mtrr: Remove unused code  [Juergen Gross]
mtrr_centaur_report_mcr() isn't used by anyone, so it can be removed.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-17-jgross@suse.com
2023-06-01  x86/mtrr: Don't let mtrr_type_lookup() return MTRR_TYPE_INVALID  [Juergen Gross]
mtrr_type_lookup() should always return a valid memory type. In case there is no information available, it should return the default UC. This removes the last case where mtrr_type_lookup() can return MTRR_TYPE_INVALID, so adjust the comment in include/uapi/asm/mtrr.h. Note that removing the MTRR_TYPE_INVALID #define from that header could break user code, so it has to stay. At the same time, the mtrr_type_lookup() stub for the !CONFIG_MTRR case should set uniform to 1, as if the memory range were covered by no MTRR at all.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-15-jgross@suse.com
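A sketch of the resulting !CONFIG_MTRR stub behavior described above:

  /* With MTRR compiled out: report the default UC type and treat the
   * range as uniform, as if no MTRR covered it at all. */
  static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
  {
      *uniform = 1;
      return MTRR_TYPE_UNCACHABLE;
  }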
2023-06-01  x86/mtrr: Use new cache_map in mtrr_type_lookup()  [Juergen Gross]
Instead of crawling through the MTRR register state, use the new cache_map for looking up the cache type(s) of a memory region. This now allows setting the uniform parameter according to the uniformity of the cache mode of the region, instead of setting it only if the complete region is mapped by a single MTRR. This now includes even the region covered by the fixed MTRR registers. Make sure uniform is always set.
[ bp: Massage. ]
[ jgross: Explain mtrr_type_lookup() logic. ]
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-14-jgross@suse.com
2023-06-01  x86/mtrr: Add mtrr=debug command line option  [Juergen Gross]
Add a new command line option "mtrr=debug" for getting debug output after building the new cache mode map. The output will include MTRR register values and the resulting map.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-13-jgross@suse.com
2023-06-01  x86/mtrr: Construct a memory map with cache modes  [Juergen Gross]
After MTRR initialization, construct a memory map with cache modes from the MTRR values. This will speed up lookups via mtrr_type_lookup(), especially in case of overlapping MTRRs. This will be needed when switching the semantics of the "uniform" parameter of mtrr_type_lookup() from "only covered by one MTRR" to "memory range has a uniform cache mode", which is the data the callers really want to know. Today this information is not easily available in case MTRRs are not well sorted regarding base address. The map will be built in __initdata. When memory management is up, the map will be moved to dynamically allocated memory in order to avoid the need for an overly large array. The size of this array is calculated using the number of variable MTRR registers and the needed size for fixed entries. Only add the map creation and expansion for now; the lookup will be added later. When writing new MTRR entries in the running system, rebuild the map inside the call from mtrr_rendezvous_handler() in order to avoid nasty race conditions with concurrent lookups.
[ bp: Move out rebuild_map() call and rename it. ]
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-12-jgross@suse.com
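An illustrative shape for one map entry (field names are assumptions for illustration, not the exact kernel layout): contiguous, sorted ranges, each carrying one effective cache mode:

  struct cache_map_entry {
      u64 start;
      u64 end;      /* exclusive */
      u8  type;     /* effective MTRR_TYPE_* for the whole range */
      u8  fixed;    /* entry stems from the fixed-range MTRRs */
  };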
2023-06-01  x86/mtrr: Add get_effective_type() service function  [Juergen Gross]
Add a service function for obtaining the effective cache mode of overlapping MTRR registers. Make use of that function in check_type_overlap().
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-11-jgross@suse.com
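The combination follows the architectural MTRR overlap rules; roughly, a sketch of what such a helper has to do:

  static u8 get_effective_type(u8 type1, u8 type2)
  {
      /* UC always wins an overlap. */
      if (type1 == MTRR_TYPE_UNCACHABLE || type2 == MTRR_TYPE_UNCACHABLE)
          return MTRR_TYPE_UNCACHABLE;

      /* A WT/WB overlap resolves to the stricter WT. */
      if ((type1 == MTRR_TYPE_WRBACK && type2 == MTRR_TYPE_WRTHROUGH) ||
          (type1 == MTRR_TYPE_WRTHROUGH && type2 == MTRR_TYPE_WRBACK))
          return MTRR_TYPE_WRTHROUGH;

      /* Other combinations are architecturally undefined; keep type1. */
      return type1;
  }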
2023-06-01  x86/mtrr: Allocate mtrr_value array dynamically  [Juergen Gross]
The mtrr_value[] array is a static variable which is used only in a few configurations. Consuming 6kB is ridiculous for this case, especially as the array doesn't need to be that large and it can easily be allocated dynamically.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-10-jgross@suse.com
2023-06-01  x86/mtrr: Move 32-bit code from mtrr.c to legacy.c  [Juergen Gross]
There is some code in mtrr.c which is relevant for old 32-bit CPUs only. Move it to a new source file, legacy.c. While moving mtrr_init_finalize(), fix the spelling of its name.
Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-9-jgross@suse.com
2023-06-01  x86/mtrr: Have only one set_mtrr() variant  [Juergen Gross]
Today there are two variants of set_mtrr(): one calling stop_machine() and one calling stop_machine_cpuslocked(). The first one (set_mtrr()) has only one caller, and this caller runs only when resuming from suspend, when the interrupts are still off and only one CPU is active. Additionally, this code is used only on rather old 32-bit CPUs not supporting SMP. For these reasons the first variant can be replaced by a simple call of mtrr_if->set(). Rename the second variant set_mtrr_cpuslocked() to set_mtrr() now that there is only one variant left, in order to have a shorter function name.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-8-jgross@suse.com
2023-06-01  x86/mtrr: Replace vendor tests in MTRR code  [Juergen Gross]
Modern CPUs all share the same MTRR interface implemented via generic_mtrr_ops. At several places in the MTRR code this generic interface is deduced via is_cpu(INTEL) tests, which only work because X86_VENDOR_INTEL is 0 (the is_cpu() macro tests mtrr_if->vendor, which isn't explicitly set in generic_mtrr_ops). Test the generic CPU feature X86_FEATURE_MTRR instead. The only other place where the .vendor member of struct mtrr_ops is used is set_num_var_ranges(), where the number of MTRR registers is determined depending on the vendor. This can easily be changed by replacing .vendor with the static number of MTRR registers. It should be noted that the test "is_cpu(HYGON)" never returned true, as there is no struct mtrr_ops with that vendor information.
[ bp: Use mtrr_enabled() before doing mtrr_if-> accesses, esp. in mtrr_trim_uncached_memory() which gets called independently from whether mtrr_if is set or not. ]
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230502120931.20719-7-jgross@suse.com
2023-06-01  x86/mtrr: Support setting MTRR state for software defined MTRRs  [Juergen Gross]
When running virtualized, MTRR access can be reduced (e.g. in Xen PV guests or when running as a SEV-SNP guest under Hyper-V). Typically, the hypervisor will not advertise the MTRR feature in CPUID data, resulting in no MTRR memory type information being available for the kernel. This has turned out to result in problems (Link tags below):
- Hyper-V SEV-SNP guests using uncached mappings where they shouldn't
- Xen PV dom0 mapping memory as WB which should be UC- instead
Solve those problems by allowing an MTRR static state override, overwriting the empty state used today. In case such a state has been set, don't call get_mtrr_state() in mtrr_bp_init(). The set state will only be used by mtrr_type_lookup(), as in all other cases mtrr_enabled() is being checked, which will return false. Accept the overwrite call only for selected cases when running as a guest. Disable X86_FEATURE_MTRR in order to avoid any MTRR modifications by just refusing them.
[ bp: Massage. ]
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/all/4fe9541e-4d4c-2b2a-f8c8-2d34a7284930@nerdbynature.de/
Link: https://lore.kernel.org/lkml/BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com
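A hedged sketch of how a guest could use such an override (the call mirrors the interface this series describes; treat the exact signature as an assumption):

  /* Xen PV dom0 / SEV-SNP-under-Hyper-V style setup: no variable MTRRs,
   * default type WB, instead of the empty (all-UC-looking) state. */
  mtrr_overwrite_state(NULL, 0, MTRR_TYPE_WRBACK);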
2023-06-01  x86/mtrr: Replace size_or_mask and size_and_mask with a much easier concept  [Juergen Gross]
Replace size_or_mask and size_and_mask with the much easier concept of high reserved bits. While at it, instead of using constants in the MTRR code, use some new
[ bp:
  - Drop mtrr_set_mask()
  - Unbreak long lines
  - Move struct mtrr_state_type out of the uapi header as it doesn't belong there. It also fixes a HDRTEST breakage "unknown type name 'bool'" as
    Reported-by: kernel test robot <lkp@intel.com>
  - Massage. ]
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230502120931.20719-3-jgross@suse.com
2023-05-09  x86/mtrr: Remove physical address size calculation  [Juergen Gross]
The physical address width calculation in mtrr_bp_init() can easily be replaced by using the already available value x86_phys_bits from struct cpuinfo_x86. The same information source can be used in mtrr/cleanup.c, removing the need to pass that value on to mtrr_cleanup(). In print_mtrr_state(), use x86_phys_bits instead of recalculating it from size_or_mask. Move the setting of size_or_mask and size_and_mask into a dedicated new function in mtrr/generic.c, which makes it possible to turn those two variables static, as they are now used only in generic.c.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20230502120931.20719-2-jgross@suse.com
2022-12-05  x86/mtrr: Make message for disabled MTRRs more descriptive  [Juergen Gross]
Instead of just saying "Disabled" when MTRRs are disabled for any reason, tell what is disabled and why.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20221205080433.16643-3-jgross@suse.com
2022-11-10  x86/mtrr: Simplify mtrr_ops initialization  [Juergen Gross]
The way mtrr_if is initialized with the correct mtrr_ops structure is quite weird. Simplify it by dropping the vendor-specific init functions and the mtrr_ops[] array. Replace those with direct assignments of the related vendor-specific ops array to mtrr_if. Note that a direct assignment is okay even for 64-bit builds, where the symbol isn't present, as the related code will be subject to "dead code elimination" due to how cpu_feature_enabled() is implemented.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-17-jgross@suse.com
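A sketch of the resulting selection logic (feature flags as in cpufeatures.h; on 64-bit builds the legacy branches are dropped by dead code elimination):

  if (cpu_feature_enabled(X86_FEATURE_MTRR))
      mtrr_if = &generic_mtrr_ops;
  else if (cpu_feature_enabled(X86_FEATURE_K6_MTRR))
      mtrr_if = &amd_mtrr_ops;
  else if (cpu_feature_enabled(X86_FEATURE_CYRIX_ARR))
      mtrr_if = &cyrix_mtrr_ops;
  else if (cpu_feature_enabled(X86_FEATURE_CENTAUR_MCR))
      mtrr_if = &centaur_mtrr_ops;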
2022-11-10  x86: Decouple PAT and MTRR handling  [Juergen Gross]
Today, PAT is usable only with MTRR being active, with some nasty tweaks to make PAT usable when running as a Xen PV guest which doesn't support MTRR. The reason for this coupling is that both PAT MSR changes and MTRR changes require a similar sequence, and so full PAT support was added using the already available MTRR handling. Xen PV PAT handling can work without MTRR, as it just needs to consume the PAT MSR setting done by the hypervisor without the ability and need to change it. This in turn has resulted in a convoluted initialization sequence and wrong decisions regarding cache mode availability due to misguiding PAT availability flags. Fix all of that by allowing PAT to be used without MTRR and by reworking the current PAT initialization sequence to match better with the newly introduced generic cache initialization. This removes the need for the recently added pat_force_disabled flag, so remove the remnants of the patch adding it.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-14-jgross@suse.com
2022-11-10  x86/mtrr: Add a stop_machine() handler calling only cache_cpu_init()  [Juergen Gross]
Instead of having a stop_machine() handler for either a specific MTRR register or all state at once, add a handler just for calling cache_cpu_init() if appropriate. Add functions for calling stop_machine() with this handler as well. Add a generic replacement for mtrr_bp_restore() and a wrapper for mtrr_bp_init().
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-13-jgross@suse.com
2022-11-10  x86/mtrr: Let cache_aps_delayed_init replace mtrr_aps_delayed_init  [Juergen Gross]
In order to prepare for decoupling MTRR and PAT, replace the MTRR-specific mtrr_aps_delayed_init flag with a more generic cache_aps_delayed_init one.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-12-jgross@suse.com
2022-11-10  x86/mtrr: Get rid of __mtrr_enabled bool  [Juergen Gross]
There is no need for keeping __mtrr_enabled, as it can easily be replaced by testing mtrr_if to be not NULL.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-11-jgross@suse.com
2022-11-10  x86/mtrr: Simplify mtrr_bp_init()  [Juergen Gross]
In case of the generic cache interface being used (Intel CPUs or a 64-bit system), the initialization sequence of the boot CPU is more complicated than necessary:
- check if MTRR enabled; if yes, call mtrr_bp_pat_init() which will disable caching, set the PAT MSR, and reenable caching
- call mtrr_cleanup(); in case that changed anything, call cache_cpu_init(), doing the same caching disable/enable dance as above, but this time with setting the (modified) MTRR state (even if MTRR was disabled) AND setting the PAT MSR (again, even with disabled MTRR)
The sequence can be simplified a lot while removing potential inconsistencies:
- check if MTRR enabled; if yes, call mtrr_cleanup() and then cache_cpu_init()
This ensures that:
- caching is no longer disabled and reenabled more than once
- MTRRs and/or the PAT MSR are not set on the boot processor in case of MTRR cleanups even if MTRRs are meant to be disabled
With that, mtrr_bp_pat_init() can be removed.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-10-jgross@suse.com
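The simplified sequence from the list above, condensed into a sketch (assuming the mtrr_enabled()/mtrr_cleanup()/cache_cpu_init() helpers this series works with):

  void __init mtrr_bp_init_sketch(void)
  {
      if (!mtrr_enabled())
          return;

      mtrr_cleanup();      /* may rearrange the variable MTRR layout */
      cache_cpu_init();    /* one disable/program MTRRs+PAT/enable cycle */
  }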
2022-11-10  x86/mtrr: Remove set_all callback from struct mtrr_ops  [Juergen Gross]
Instead of using an indirect call to mtrr_if->set_all, just call the only possible target cache_cpu_init() directly. Remove the set_all function pointer from struct mtrr_ops.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-9-jgross@suse.com
2022-11-10  x86/mtrr: Disentangle MTRR init from PAT init  [Juergen Gross]
Add a main cache_cpu_init() init routine which initializes MTRR and/or PAT support depending on what has been detected on the system. Leave the MTRR-specific initialization in an MTRR-specific init function, where the smp_changes_mask setting now happens with caches disabled. This global mask update was done with caches enabled before, probably because atomic operations while running uncached might have been quite expensive. But since only systems with a broken BIOS should ever require setting any bit in smp_changes_mask, hurting those devices with a penalty of a few microseconds during boot shouldn't be a real issue.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-8-jgross@suse.com
2022-11-10  x86/mtrr: Move cache control code to cacheinfo.c  [Juergen Gross]
Prepare making PAT and MTRR support independent from each other by moving some code needed by both out of the MTRR-specific sources.
[ bp: Massage commit message. ]
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-7-jgross@suse.com
2022-11-10  x86/mtrr: Split MTRR-specific handling from cache dis/enabling  [Juergen Gross]
Split the MTRR-specific actions from cache_disable() and cache_enable() into new functions mtrr_disable() and mtrr_enable().
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-6-jgross@suse.com
2022-11-10  x86/mtrr: Rename prepare_set() and post_set()  [Juergen Gross]
Rename the currently MTRR-specific functions prepare_set() and post_set() in preparation for moving them. Make them non-static and put their prototypes into cacheinfo.h, where they will end up after moving them to their final position anyway. Expand the comment before the functions with an introductory line, and rename two related static variables, too.
[ bp: Massage commit message. ]
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-5-jgross@suse.com
2022-11-10  x86/mtrr: Replace use_intel() with a local flag  [Juergen Gross]
In the MTRR code, use_intel() is used in only one source file, and the relevant use_intel_if member of struct mtrr_ops is set only in generic_mtrr_ops. Replace use_intel() with a single flag in cacheinfo.c which can be set when assigning generic_mtrr_ops to mtrr_if. This allows dropping use_intel_if from mtrr_ops, while preparing to decouple PAT from MTRR. As another preparation for the PAT/MTRR decoupling, use one bit for MTRR control and one for PAT control. For now, set both bits together; this can be changed later. As the new flag will be set only if mtrr_enabled is set, the test for mtrr_enabled can be dropped in some places.
[ bp: Massage commit message. ]
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221102074713.21493-4-jgross@suse.com
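A sketch of the flag idea (the names follow this series' spirit; treat them as assumptions rather than the exact patch):

  /* In cacheinfo.c: one control word instead of use_intel(). */
  unsigned int memory_caching_control;

  #define CACHE_MTRR    0x01
  #define CACHE_PAT     0x02

  static void set_generic_caching_sketch(void)
  {
      /* Runs when generic_mtrr_ops is selected for mtrr_if.
       * For now set both bits together; they can be split later. */
      memory_caching_control |= CACHE_MTRR | CACHE_PAT;
  }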
2022-10-20  x86/mtrr: Remove unused cyrix_set_all() function  [Juergen Gross]
The Cyrix CPU-specific MTRR function cyrix_set_all() will never be called, as the mtrr_ops->set_all() callback is only called in the use_intel() case, which would require the use_intel_if member of struct mtrr_ops to be set, which isn't the case for Cyrix.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20221004081023.32402-3-jgross@suse.com
2022-10-19  x86/mtrr: Add comment for set_mtrr_state() serialization  [Juergen Gross]
Add a comment about set_mtrr_state() needing serialization.
[ bp: Touchups. ]
Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20220820092533.29420-2-jgross@suse.com
2021-08-10  x86/mtrr: Replace deprecated CPU-hotplug functions  [Sebastian Andrzej Siewior]
The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock(). Replace the deprecated CPU-hotplug functions with the official versions. The behavior remains unchanged.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210803141621.780504-8-bigeasy@linutronix.de
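The mechanical replacement looks like this (same semantics, current names):

  /* Before (deprecated since the CPU-hotplug rework): */
  get_online_cpus();
  /* ... touch MTRRs with CPU hotplug excluded ... */
  put_online_cpus();

  /* After (direct mapping, unchanged behavior): */
  cpus_read_lock();
  /* ... touch MTRRs with CPU hotplug excluded ... */
  cpus_read_unlock();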
2021-05-10  x86/msr: Rename MSR_K8_SYSCFG to MSR_AMD64_SYSCFG  [Brijesh Singh]
The SYSCFG MSR continued being updated beyond the K8 family; drop the K8 name from it.
Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Joerg Roedel <jroedel@suse.de>
Link: https://lkml.kernel.org/r/20210427111636.1207-4-brijesh.singh@amd.com
2021-03-21  x86: Fix various typos in comments, take #2  [Ingo Molnar]
Fix another ~42 single-word typos in arch/x86/ code comments that were missed in the first pass, in particular in .S files.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: linux-kernel@vger.kernel.org
2021-03-18  x86: Fix various typos in comments  [Ingo Molnar]
Fix ~144 single-word typos in arch/x86/ code comments. Doing this in a single commit should reduce the churn.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: linux-kernel@vger.kernel.org
2021-02-20  Merge tag 'x86_mm_for_v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds]
Pull x86 mm cleanups from Borislav Petkov:
- PTRACE_GETREGS/PTRACE_PUTREGS regset selection cleanup
- Another initial cleanup - more to follow - to the fault handling code
- Other minor cleanups and corrections
* tag 'x86_mm_for_v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
  x86/{fault,efi}: Fix and rename efi_recover_from_page_fault()
  x86/fault: Don't run fixups for SMAP violations
  x86/fault: Don't look for extable entries for SMEP violations
  x86/fault: Rename no_context() to kernelmode_fixup_or_oops()
  x86/fault: Bypass no_context() for implicit kernel faults from usermode
  x86/fault: Split the OOPS code out from no_context()
  x86/fault: Improve kernel-executing-user-memory handling
  x86/fault: Correct a few user vs kernel checks wrt WRUSS
  x86/fault: Document the locking in the fault_signal_pending() path
  x86/fault/32: Move is_f00f_bug() to do_kern_addr_fault()
  x86/fault: Fold mm_fault_error() into do_user_addr_fault()
  x86/fault: Skip the AMD erratum #91 workaround on unaffected CPUs
  x86/fault: Fix AMD erratum #91 errata fixup for user code
  x86/Kconfig: Remove HPET_EMULATE_RTC depends on RTC
  x86/asm: Fixup TASK_SIZE_MAX comment
  x86/ptrace: Clean up PTRACE_GETREGS/PTRACE_PUTREGS regset selection
  x86/vm86/32: Remove VM86_SCREEN_BITMAP support
  x86: Remove definition of DEBUG
  x86/entry: Remove now unused do_IRQ() declaration
  x86/mm: Remove duplicate definition of _PAGE_PAT_LARGE
  ...
2021-01-15  x86: Remove definition of DEBUG  [Tom Rix]
Defining DEBUG should only be done in development. So remove it.
Signed-off-by: Tom Rix <trix@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lkml.kernel.org/r/20210114212827.47584-1-trix@redhat.com
2021-01-06  x86/mtrr: Correct the range check before performing MTRR type lookups  [Ying-Tsun Huang]
In mtrr_type_lookup(), if the input memory address region is not in the MTRR, over 4GB, and not over the top of memory, a write-back attribute is returned. These condition checks are for ensuring the input memory address region is actually mapped to physical memory. However, if the end address is just aligned with the top of memory, the condition check treats the address as being over the top of memory, and the write-back attribute is not returned. This hits a real use case with NVDIMM: the nd_pmem module tries to map NVDIMMs as cacheable memory when NVDIMMs are connected. If an NVDIMM is the last of the DIMMs, the performance of this NVDIMM becomes very low since it is aligned with the top of memory and its memory type is uncached-minus. Move the change of the input end address to inclusive up into mtrr_type_lookup(), before checking for the top of memory in either of the mtrr_type_lookup_{variable,fixed}() helpers.
[ bp: Massage commit message. ]
Fixes: 0cc705f56e40 ("x86/mm/mtrr: Clean up mtrr_type_lookup()")
Signed-off-by: Ying-Tsun Huang <ying-tsun.huang@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20201215070721.4349-1-ying-tsun.huang@amd.com
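A sketch of the reworked entry point (the helper names below are placeholders standing in for the mtrr_type_lookup_{fixed,variable}() helpers):

  static u8 type_lookup_fixed(u64 start, u64 end, u8 *uniform);     /* placeholder */
  static u8 type_lookup_variable(u64 start, u64 end, u8 *uniform);  /* placeholder */

  u8 mtrr_type_lookup_sketch(u64 start, u64 end, u8 *uniform)
  {
      /* Convert to an inclusive end once, up front, so a region ending
       * exactly at the top of memory no longer appears out of range. */
      end--;

      if (start < 0x100000)    /* below 1MB: fixed-range MTRRs apply */
          return type_lookup_fixed(start, end, uniform);

      return type_lookup_variable(start, end, uniform);
  }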
2020-12-30  x86/mtrr: Convert comma to semicolon  [Zheng Yongjun]
Replace a comma between expression statements with a semicolon.
Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20201216131159.14393-1-zhengyongjun3@huawei.com
2020-12-14  Merge tag 'core-rcu-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds]
Pull RCU updates from Thomas Gleixner: "RCU, LKMM and KCSAN updates collected by Paul McKenney.
RCU:
- Avoid cpuinfo-induced IPI pileups and idle-CPU IPIs
- Lockdep-RCU updates reducing the need for __maybe_unused
- Tasks-RCU updates
- Miscellaneous fixes
- Documentation updates
- Torture-test updates
KCSAN:
- updates for selftests, avoiding setting watchpoints on NULL pointers
- fix to watchpoint encoding
LKMM:
- updates for documentation along with some updates to example-code litmus tests"
* tag 'core-rcu-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
  srcu: Take early exit on memory-allocation failure
  rcu/tree: Defer kvfree_rcu() allocation to a clean context
  rcu: Do not report strict GPs for outgoing CPUs
  rcu: Fix a typo in rcu_blocking_is_gp() header comment
  rcu: Prevent lockdep-RCU splats on lock acquisition/release
  rcu/tree: nocb: Avoid raising softirq for offloaded ready-to-execute CBs
  rcu,ftrace: Fix ftrace recursion
  rcu/tree: Make struct kernel_param_ops definitions const
  rcu/tree: Add a warning if CPU being onlined did not report QS already
  rcu: Clarify nocb kthreads naming in RCU_NOCB_CPU config
  rcu: Fix single-CPU check in rcu_blocking_is_gp()
  rcu: Implement rcu_segcblist_is_offloaded() config dependent
  list.h: Update comment to explicitly note circular lists
  rcu: Panic after fixed number of stalls
  x86/smpboot: Move rcu_cpu_starting() earlier
  rcu: Allow rcu_irq_enter_check_tick() from NMI
  tools/memory-model: Label MP tests' producers and consumers
  tools/memory-model: Use "buf" and "flag" for message-passing tests
  tools/memory-model: Add types to litmus tests
  tools/memory-model: Add a glossary of LKMM terms
  ...
2020-11-19  x86/smpboot: Move rcu_cpu_starting() earlier  [Paul E. McKenney]
The call to rcu_cpu_starting() in mtrr_ap_init() is not early enough in the CPU-hotplug onlining process, which results in lockdep splats as follows:

  =============================
  WARNING: suspicious RCU usage
  5.9.0+ #268 Not tainted
  -----------------------------
  kernel/kprobes.c:300 RCU-list traversed in non-reader section!!

  other info that might help us debug this:

  RCU used illegally from offline CPU!
  rcu_scheduler_active = 1, debug_locks = 1
  no locks held by swapper/1/0.

  stack backtrace:
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.9.0+ #268
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.10.2-1ubuntu1 04/01/2014
  Call Trace:
   dump_stack+0x77/0x97
   __is_insn_slot_addr+0x15d/0x170
   kernel_text_address+0xba/0xe0
   ? get_stack_info+0x22/0xa0
   __kernel_text_address+0x9/0x30
   show_trace_log_lvl+0x17d/0x380
   ? dump_stack+0x77/0x97
   dump_stack+0x77/0x97
   __lock_acquire+0xdf7/0x1bf0
   lock_acquire+0x258/0x3d0
   ? vprintk_emit+0x6d/0x2c0
   _raw_spin_lock+0x27/0x40
   ? vprintk_emit+0x6d/0x2c0
   vprintk_emit+0x6d/0x2c0
   printk+0x4d/0x69
   start_secondary+0x1c/0x100
   secondary_startup_64_no_verify+0xb8/0xbb

This is avoided by moving the call to rcu_cpu_starting() up near the beginning of the start_secondary() function. Note that the raw_smp_processor_id() is required in order to avoid calling into lockdep before RCU has declared the CPU to be watched for readers.
Link: https://lore.kernel.org/lkml/160223032121.7002.1269740091547117869.tip-bot2@tip-bot2/
Reported-by: Qian Cai <cai@redhat.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2020-11-02  x86/mtrr: Fix a kernel-doc markup  [Mauro Carvalho Chehab]
Kernel-doc markup should use this format:
  identifier - description
Fix it.
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/2217cd4ae9e561da2825485eb97de77c65741489.1603469755.git.mchehab+huawei@kernel.org
2020-08-23  treewide: Use fallthrough pseudo-keyword  [Gustavo A. R. Silva]
Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough [1]. Also, remove unnecessary fall-through markings where that is the case.
[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
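The pattern, in brief (do_add()/do_set() are hypothetical handlers; the ioctl command names come from the mtrr uapi; fallthrough expands to a compiler attribute the tooling can check, unlike a comment):

  switch (cmd) {
  case MTRRIOC_ADD_ENTRY:
      err = do_add(arg);
      fallthrough;    /* replaces the old "fall through" comment */
  case MTRRIOC_SET_ENTRY:
      err = do_set(arg);
      break;
  }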
2020-04-26  x86/tlb: Move __flush_tlb() out of line  [Thomas Gleixner]
cpu_tlbstate is exported because various TLB-related functions need access to it, but cpu_tlbstate is sensitive information which should only be accessed by well-contained kernel functions and not be directly exposed to modules. As a first step, move __flush_tlb() out of line and hide the native function. The latter can be static when CONFIG_PARAVIRT is disabled. Consolidate the namespace while at it and remove the pointless extra wrapper in the paravirt code. No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200421092559.246130908@linutronix.de
2020-02-04  proc: convert everything to "struct proc_ops"  [Alexey Dobriyan]
The most notable change is the DEFINE_SHOW_ATTRIBUTE macro split in seq_file.h. Conversion rule is:
  llseek => proc_lseek
  unlocked_ioctl => proc_ioctl
  xxx => proc_xxx
  delete ".owner = THIS_MODULE" line
[akpm@linux-foundation.org: fix drivers/isdn/capi/kcapi_proc.c]
[sfr@canb.auug.org.au: fix kernel/sched/psi.c]
Link: http://lkml.kernel.org/r/20200122180545.36222f50@canb.auug.org.au
Link: http://lkml.kernel.org/r/20191225172546.GB13378@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
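Applied to this directory's /proc/mtrr interface, the conversion looks roughly like this (a before/after sketch following the rules above; handler names match the mtrr proc code):

  /* Before: */
  static const struct file_operations mtrr_fops = {
      .owner          = THIS_MODULE,
      .open           = mtrr_open,
      .read           = seq_read,
      .llseek         = seq_lseek,
      .write          = mtrr_write,
      .unlocked_ioctl = mtrr_ioctl,
      .release        = mtrr_close,
  };

  /* After: */
  static const struct proc_ops mtrr_proc_ops = {
      .proc_open    = mtrr_open,
      .proc_read    = seq_read,
      .proc_lseek   = seq_lseek,
      .proc_write   = mtrr_write,
      .proc_ioctl   = mtrr_ioctl,
      .proc_release = mtrr_close,
  };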
2020-01-28  Merge branch 'x86-mtrr-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds]
Pull x86 mtrr updates from Ingo Molnar: "Two changes: restrict /proc/mtrr to CAP_SYS_ADMIN, plus a cleanup"
* 'x86-mtrr-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mtrr: Require CAP_SYS_ADMIN for all access
  x86/mtrr: Get rid of mtrr_seq_show() forward declaration
2019-12-10  x86/mm/pat: Rename <asm/pat.h> => <asm/memtype.h>  [Ingo Molnar]
pat.h is a file whose main purpose is to provide the memtype_*() APIs. PAT is the low-level hardware mechanism - but the high-level abstraction is memtype. So name the header <memtype.h> as well; this goes hand in hand with memtype.c and memtype_interval.c.
Signed-off-by: Ingo Molnar <mingo@kernel.org>