author | Linus Torvalds <torvalds@linux-foundation.org> | 2022-01-10 20:34:00 -0800 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2022-01-10 20:34:00 -0800 |
commit | b35b6d4d71365fbfb6f2cc8edc331b3882ca817e (patch) | |
tree | 1b99ef00d7ad53f77002460e77c6048ae2c0c211 /drivers/cpufreq/intel_pstate.c | |
parent | bca21755b9fc00dbe371994b53389eb5d70b8e72 (diff) | |
parent | 78e6e4dfd8f0cbb477a6f9571123edcbd5873c28 (diff) | |
Merge tag 'pm-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"The most signigicant change here is the addition of a new cpufreq
'P-state' driver for AMD processors as a better replacement for the
venerable acpi-cpufreq driver.
There are also other cpufreq updates (in the core, intel_pstate, ARM
drivers), PM core updates (mostly related to adding new macros for
declaring PM operations which should make the lives of driver
developers somewhat easier), and a bunch of assorted fixes and
cleanups.
Summary:
- Add new P-state driver for AMD processors (Huang Rui).
- Fix initialization of min and max frequency QoS requests in the
cpufreq core (Rafael Wysocki).
- Fix EPP handling on Alder Lake in intel_pstate (Srinivas
Pandruvada).
- Make intel_pstate update cpuinfo.max_freq when notified of HWP
capabilities changes and drop a redundant function call from that
driver (Rafael Wysocki).
- Improve IRQ support in the Qcom cpufreq driver (Ard Biesheuvel,
Stephen Boyd, Vladimir Zapolskiy).
- Fix double devm_remap() in the Mediatek cpufreq driver (Hector
Yuan).
- Introduce thermal pressure helpers for cpufreq CPU cooling (Lukasz
Luba).
- Make cpufreq use default_groups in kobj_type (Greg Kroah-Hartman).
- Make cpuidle use default_groups in kobj_type (Greg Kroah-Hartman).
- Fix two comments in cpuidle code (Jason Wang, Yang Li).
- Allow model-specific normal EPB value to be used in the intel_epb
sysfs attribute handling code (Srinivas Pandruvada).
- Simplify locking in pm_runtime_put_suppliers() (Rafael Wysocki).
- Add safety net to supplier device release in the runtime PM core
code (Rafael Wysocki).
- Capture device status before disabling runtime PM for it (Rafael
Wysocki).
- Add new macros for declaring PM operations to allow drivers to
avoid guarding them with CONFIG_PM #ifdefs or __maybe_unused and
update some drivers to use these macros (Paul Cercueil); a short
sketch of the new-style declaration follows this summary.
- Allow ACPI hardware signature to be honoured during restore from
hibernation (David Woodhouse).
- Update outdated operating performance points (OPP) documentation
(Tang Yizhou).
- Reduce log severity for informative message regarding frequency
transition failures in devfreq (Tzung-Bi Shih).
- Add DRAM frequency controller devfreq driver for Allwinner sunXi
SoCs (Samuel Holland).
- Add missing COMMON_CLK dependency to sun8i devfreq driver (Arnd
Bergmann).
- Add support for new layout of Psys PowerLimit Register on SPR to
the Intel RAPL power capping driver (Zhang Rui).
- Fix typo in a comment in idle_inject.c (Jason Wang).
- Remove unused function definition from the DTPM (Dynamic Thermal
Power Management) power capping framework (Daniel Lezcano).
- Reduce DTPM trace verbosity (Daniel Lezcano)"
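The new PM-operations macros called out in the summary are the part of this pull most likely to show up in ordinary driver code. Below is a minimal sketch of the intended usage, assuming the macro names introduced by that series (DEFINE_SIMPLE_DEV_PM_OPS() and pm_sleep_ptr() in include/linux/pm.h); the "foo" platform driver and its callbacks are hypothetical and exist only for illustration.

```c
/*
 * Hypothetical "foo" platform driver, shown only to illustrate the
 * new-style PM declarations (assumed names: DEFINE_SIMPLE_DEV_PM_OPS()
 * and pm_sleep_ptr()).  The callbacks need no #ifdef CONFIG_PM_SLEEP
 * guards and no __maybe_unused markers.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm.h>

static int foo_probe(struct platform_device *pdev)
{
	return 0;
}

static int foo_suspend(struct device *dev)
{
	/* Quiesce the (made-up) device here. */
	return 0;
}

static int foo_resume(struct device *dev)
{
	/* Restore the device state here. */
	return 0;
}

/* Builds a struct dev_pm_ops with system-sleep callbacks only. */
static DEFINE_SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);

static struct platform_driver foo_driver = {
	.probe = foo_probe,
	.driver = {
		.name = "foo",
		/* Resolves to NULL when CONFIG_PM_SLEEP is not set. */
		.pm = pm_sleep_ptr(&foo_pm_ops),
	},
};
module_platform_driver(foo_driver);

MODULE_LICENSE("GPL");
```

The point of the pattern is that the callbacks stay referenced from the dev_pm_ops structure, so when sleep support is compiled out the compiler can drop the structure and the callbacks together, with no CONFIG_PM #ifdefs, no __maybe_unused annotations, and no unused-function warnings.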
* tag 'pm-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (53 commits)
x86, sched: Fix undefined reference to init_freq_invariance_cppc() build error
cpufreq: amd-pstate: Fix Kconfig dependencies for AMD P-State
cpufreq: amd-pstate: Fix struct amd_cpudata kernel-doc comment
cpuidle: use default_groups in kobj_type
x86: intel_epb: Allow model specific normal EPB value
MAINTAINERS: Add AMD P-State driver maintainer entry
Documentation: amd-pstate: Add AMD P-State driver introduction
cpufreq: amd-pstate: Add AMD P-State performance attributes
cpufreq: amd-pstate: Add AMD P-State frequencies attributes
cpufreq: amd-pstate: Add boost mode support for AMD P-State
cpufreq: amd-pstate: Add trace for AMD P-State module
cpufreq: amd-pstate: Introduce the support for the processors with shared memory solution
cpufreq: amd-pstate: Add fast switch function for AMD P-State
cpufreq: amd-pstate: Introduce a new AMD P-State driver to support future processors
ACPI: CPPC: Add CPPC enable register function
ACPI: CPPC: Check present CPUs for determining _CPC is valid
ACPI: CPPC: Implement support for SystemIO registers
x86/msr: Add AMD CPPC MSR definitions
x86/cpufeatures: Add AMD Collaborative Processor Performance Control feature flag
cpufreq: use default_groups in kobj_type
...
Diffstat (limited to 'drivers/cpufreq/intel_pstate.c')
-rw-r--r-- | drivers/cpufreq/intel_pstate.c | 121 |
1 file changed, 81 insertions, 40 deletions
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index dec2a5649ac1..bc7f7e6759bd 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -664,19 +664,29 @@ static int intel_pstate_set_epb(int cpu, s16 pref)
  *	3		balance_power
  *	4		power
  */
+
+enum energy_perf_value_index {
+	EPP_INDEX_DEFAULT = 0,
+	EPP_INDEX_PERFORMANCE,
+	EPP_INDEX_BALANCE_PERFORMANCE,
+	EPP_INDEX_BALANCE_POWERSAVE,
+	EPP_INDEX_POWERSAVE,
+};
+
 static const char * const energy_perf_strings[] = {
-	"default",
-	"performance",
-	"balance_performance",
-	"balance_power",
-	"power",
+	[EPP_INDEX_DEFAULT] = "default",
+	[EPP_INDEX_PERFORMANCE] = "performance",
+	[EPP_INDEX_BALANCE_PERFORMANCE] = "balance_performance",
+	[EPP_INDEX_BALANCE_POWERSAVE] = "balance_power",
+	[EPP_INDEX_POWERSAVE] = "power",
 	NULL
 };
-static const unsigned int epp_values[] = {
-	HWP_EPP_PERFORMANCE,
-	HWP_EPP_BALANCE_PERFORMANCE,
-	HWP_EPP_BALANCE_POWERSAVE,
-	HWP_EPP_POWERSAVE
+static unsigned int epp_values[] = {
+	[EPP_INDEX_DEFAULT] = 0, /* Unused index */
+	[EPP_INDEX_PERFORMANCE] = HWP_EPP_PERFORMANCE,
+	[EPP_INDEX_BALANCE_PERFORMANCE] = HWP_EPP_BALANCE_PERFORMANCE,
+	[EPP_INDEX_BALANCE_POWERSAVE] = HWP_EPP_BALANCE_POWERSAVE,
+	[EPP_INDEX_POWERSAVE] = HWP_EPP_POWERSAVE,
 };
 
 static int intel_pstate_get_energy_pref_index(struct cpudata *cpu_data, int *raw_epp)
@@ -690,14 +700,14 @@ static int intel_pstate_get_energy_pref_index(struct cpudata *cpu_data, int *raw
 		return epp;
 
 	if (boot_cpu_has(X86_FEATURE_HWP_EPP)) {
-		if (epp == HWP_EPP_PERFORMANCE)
-			return 1;
-		if (epp == HWP_EPP_BALANCE_PERFORMANCE)
-			return 2;
-		if (epp == HWP_EPP_BALANCE_POWERSAVE)
-			return 3;
-		if (epp == HWP_EPP_POWERSAVE)
-			return 4;
+		if (epp == epp_values[EPP_INDEX_PERFORMANCE])
+			return EPP_INDEX_PERFORMANCE;
+		if (epp == epp_values[EPP_INDEX_BALANCE_PERFORMANCE])
+			return EPP_INDEX_BALANCE_PERFORMANCE;
+		if (epp == epp_values[EPP_INDEX_BALANCE_POWERSAVE])
+			return EPP_INDEX_BALANCE_POWERSAVE;
+		if (epp == epp_values[EPP_INDEX_POWERSAVE])
+			return EPP_INDEX_POWERSAVE;
 		*raw_epp = epp;
 		return 0;
 	} else if (boot_cpu_has(X86_FEATURE_EPB)) {
@@ -757,7 +767,7 @@ static int intel_pstate_set_energy_pref_index(struct cpudata *cpu_data,
 	if (use_raw)
 		epp = raw_epp;
 	else if (epp == -EINVAL)
-		epp = epp_values[pref_index - 1];
+		epp = epp_values[pref_index];
 
 	/*
 	 * To avoid confusion, refuse to set EPP to any values different
@@ -843,7 +853,7 @@ static ssize_t store_energy_performance_preference(
 	 * upfront.
 	 */
 	if (!raw)
-		epp = ret ? epp_values[ret - 1] : cpu->epp_default;
+		epp = ret ? epp_values[ret] : cpu->epp_default;
 
 	if (cpu->epp_cached != epp) {
 		int err;
@@ -1124,19 +1134,22 @@ static void intel_pstate_update_policies(void)
 		cpufreq_update_policy(cpu);
 }
 
+static void __intel_pstate_update_max_freq(struct cpudata *cpudata,
+					   struct cpufreq_policy *policy)
+{
+	policy->cpuinfo.max_freq = global.turbo_disabled_mf ?
+			cpudata->pstate.max_freq : cpudata->pstate.turbo_freq;
+	refresh_frequency_limits(policy);
+}
+
 static void intel_pstate_update_max_freq(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpu);
-	struct cpudata *cpudata;
 
 	if (!policy)
 		return;
 
-	cpudata = all_cpu_data[cpu];
-	policy->cpuinfo.max_freq = global.turbo_disabled_mf ?
-			cpudata->pstate.max_freq : cpudata->pstate.turbo_freq;
-
-	refresh_frequency_limits(policy);
+	__intel_pstate_update_max_freq(all_cpu_data[cpu], policy);
 
 	cpufreq_cpu_release(policy);
 }
@@ -1584,8 +1597,15 @@ static void intel_pstate_notify_work(struct work_struct *work)
 {
 	struct cpudata *cpudata =
 		container_of(to_delayed_work(work), struct cpudata, hwp_notify_work);
+	struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpudata->cpu);
+
+	if (policy) {
+		intel_pstate_get_hwp_cap(cpudata);
+		__intel_pstate_update_max_freq(cpudata, policy);
+
+		cpufreq_cpu_release(policy);
+	}
 
-	cpufreq_update_policy(cpudata->cpu);
 	wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
 }
 
@@ -1679,10 +1699,18 @@ static void intel_pstate_hwp_enable(struct cpudata *cpudata)
 		wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
 
 	wrmsrl_on_cpu(cpudata->cpu, MSR_PM_ENABLE, 0x1);
-	if (cpudata->epp_default == -EINVAL)
-		cpudata->epp_default = intel_pstate_get_epp(cpudata, 0);
 
 	intel_pstate_enable_hwp_interrupt(cpudata);
+
+	if (cpudata->epp_default >= 0)
+		return;
+
+	if (epp_values[EPP_INDEX_BALANCE_PERFORMANCE] == HWP_EPP_BALANCE_PERFORMANCE) {
+		cpudata->epp_default = intel_pstate_get_epp(cpudata, 0);
+	} else {
+		cpudata->epp_default = epp_values[EPP_INDEX_BALANCE_PERFORMANCE];
+		intel_pstate_set_epp(cpudata, cpudata->epp_default);
+	}
 }
 
 static int atom_get_min_pstate(void)
@@ -2486,18 +2514,14 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
 	 * HWP needs some special consideration, because HWP_REQUEST uses
 	 * abstract values to represent performance rather than pure ratios.
 	 */
-	if (hwp_active) {
-		intel_pstate_get_hwp_cap(cpu);
+	if (hwp_active && cpu->pstate.scaling != perf_ctl_scaling) {
+		int scaling = cpu->pstate.scaling;
+		int freq;
 
-		if (cpu->pstate.scaling != perf_ctl_scaling) {
-			int scaling = cpu->pstate.scaling;
-			int freq;
-
-			freq = max_policy_perf * perf_ctl_scaling;
-			max_policy_perf = DIV_ROUND_UP(freq, scaling);
-			freq = min_policy_perf * perf_ctl_scaling;
-			min_policy_perf = DIV_ROUND_UP(freq, scaling);
-		}
+		freq = max_policy_perf * perf_ctl_scaling;
+		max_policy_perf = DIV_ROUND_UP(freq, scaling);
+		freq = min_policy_perf * perf_ctl_scaling;
+		min_policy_perf = DIV_ROUND_UP(freq, scaling);
 	}
 
 	pr_debug("cpu:%d min_policy_perf:%d max_policy_perf:%d\n",
@@ -3349,6 +3373,16 @@ static bool intel_pstate_hwp_is_enabled(void)
 	return !!(value & 0x1);
 }
 
+static const struct x86_cpu_id intel_epp_balance_perf[] = {
+	/*
+	 * Set EPP value as 102, this is the max suggested EPP
+	 * which can result in one core turbo frequency for
+	 * AlderLake Mobile CPUs.
+	 */
+	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, 102),
+	{}
+};
+
 static int __init intel_pstate_init(void)
 {
 	static struct cpudata **_all_cpu_data;
@@ -3438,6 +3472,13 @@ hwp_cpu_matched:
 
 	intel_pstate_sysfs_expose_params();
 
+	if (hwp_active) {
+		const struct x86_cpu_id *id = x86_match_cpu(intel_epp_balance_perf);
+
+		if (id)
+			epp_values[EPP_INDEX_BALANCE_PERFORMANCE] = id->driver_data;
+	}
+
 	mutex_lock(&intel_pstate_driver_lock);
 	rc = intel_pstate_register_driver(default_driver);
 	mutex_unlock(&intel_pstate_driver_lock);
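To see the effect of the epp_values[] rework in the diff above outside the driver, here is a stand-alone user-space C sketch (not kernel code) of the same pattern: a single enum indexes both the sysfs preference strings and the EPP values, so a preference index is used directly without the old "- 1" adjustment, and one entry can be overridden at init time the way the intel_epp_balance_perf table overrides balance_performance to 102 on Alder Lake mobile parts. The 0x00/0x80/0xC0/0xFF values mirror the architectural HWP_EPP_* constants; the names in the sketch are invented for the example.

```c
#include <stdio.h>

/* One index shared by the preference strings and the EPP value table. */
enum epp_index {
	EPP_DEFAULT = 0,
	EPP_PERFORMANCE,
	EPP_BALANCE_PERFORMANCE,
	EPP_BALANCE_POWERSAVE,
	EPP_POWERSAVE,
	EPP_COUNT
};

static const char * const epp_strings[EPP_COUNT] = {
	[EPP_DEFAULT]             = "default",
	[EPP_PERFORMANCE]         = "performance",
	[EPP_BALANCE_PERFORMANCE] = "balance_performance",
	[EPP_BALANCE_POWERSAVE]   = "balance_power",
	[EPP_POWERSAVE]           = "power",
};

/* Same layout as epp_values[] in the patch; the hex values are the HWP_EPP_* constants. */
static unsigned int epp_table[EPP_COUNT] = {
	[EPP_DEFAULT]             = 0,    /* unused index */
	[EPP_PERFORMANCE]         = 0x00,
	[EPP_BALANCE_PERFORMANCE] = 0x80,
	[EPP_BALANCE_POWERSAVE]   = 0xC0,
	[EPP_POWERSAVE]           = 0xFF,
};

int main(void)
{
	/* Model-specific override, like the ALDERLAKE_L driver_data of 102. */
	epp_table[EPP_BALANCE_PERFORMANCE] = 102;

	/* A preference index maps straight to its EPP value; no "- 1" needed. */
	for (int i = EPP_PERFORMANCE; i < EPP_COUNT; i++)
		printf("%-20s -> EPP %u\n", epp_strings[i], epp_table[i]);

	return 0;
}
```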