author     Rafael J. Wysocki <rafael.j.wysocki@intel.com>   2017-12-16 02:05:48 +0100
committer  Rafael J. Wysocki <rafael.j.wysocki@intel.com>   2017-12-16 02:05:48 +0100
commit     c51a024e3913e9dbaf4dfcb9aaba825668a89ace (patch)
tree       9772af495ced0bde0d3c58b6e6fb5b395b3afae9
parent     3487972d7fa6c5143951436ada5933dcf0ec659d (diff)
parent     34fb8f0ba9ceea88e116688f9f53e3802c38aafb (diff)
Merge back PM core material for v4.16.
-rw-r--r--   Documentation/driver-api/pm/devices.rst |  27
-rw-r--r--   Documentation/power/pci.txt             |  11
-rw-r--r--   drivers/acpi/device_pm.c                |  27
-rw-r--r--   drivers/base/power/main.c               | 102
-rw-r--r--   drivers/base/power/sysfs.c              | 182
-rw-r--r--   drivers/pci/pci-driver.c                |  19
-rw-r--r--   include/linux/pm.h                      |  16
7 files changed, 252 insertions(+), 132 deletions(-)
diff --git a/Documentation/driver-api/pm/devices.rst b/Documentation/driver-api/pm/devices.rst
index 53c1b0b06da5..b0fe63c91f8d 100644
--- a/Documentation/driver-api/pm/devices.rst
+++ b/Documentation/driver-api/pm/devices.rst
@@ -788,6 +788,29 @@ must reflect the "active" status for runtime PM in that case.
 
 During system-wide resume from a sleep state it's easiest to put devices into
 the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
-Refer to that document for more information regarding this particular issue as
+[Refer to that document for more information regarding this particular issue as
 well as for information on the device runtime power management framework in
-general.
+general.]
+
+However, it often is desirable to leave devices in suspend after system
+transitions to the working state, especially if those devices had been in
+runtime suspend before the preceding system-wide suspend (or analogous)
+transition. Device drivers can use the ``DPM_FLAG_LEAVE_SUSPENDED`` flag to
+indicate to the PM core (and middle-layer code) that they prefer the specific
+devices handled by them to be left suspended and they have no problems with
+skipping their system-wide resume callbacks for this reason. Whether or not the
+devices will actually be left in suspend may depend on their state before the
+given system suspend-resume cycle and on the type of the system transition under
+way. In particular, devices are not left suspended if that transition is a
+restore from hibernation, as device states are not guaranteed to be reflected
+by the information stored in the hibernation image in that case.
+
+The middle-layer code involved in the handling of the device is expected to
+indicate to the PM core if the device may be left in suspend by setting its
+:c:member:`power.may_skip_resume` status bit which is checked by the PM core
+during the "noirq" phase of the preceding system-wide suspend (or analogous)
+transition. The middle layer is then responsible for handling the device as
+appropriate in its "noirq" resume callback, which is executed regardless of
+whether or not the device is left suspended, but the other resume callbacks
+(except for ``->complete``) will be skipped automatically by the PM core if the
+device really can be left in suspend.
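For illustration only (this sketch is not part of the patches above), a driver opts in by setting the flag once at probe time. "foo" is a made-up driver, while dev_pm_set_driver_flags() and the DPM_FLAG_* constants are the ones declared in include/linux/pm.h:

/* Illustrative sketch only -- "foo" is a made-up driver. */
#include <linux/device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct device *dev)
{
        /*
         * Stay in runtime suspend across system suspend when possible
         * (SMART_SUSPEND) and prefer to be left suspended after system
         * resume (LEAVE_SUSPENDED).  The driver core clears these flags
         * again when the driver is unbound.
         */
        dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND |
                                     DPM_FLAG_LEAVE_SUSPENDED);

        pm_runtime_enable(dev);
        return 0;
}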
diff --git a/Documentation/power/pci.txt b/Documentation/power/pci.txt
index 704cd36079b8..8eaf9ee24d43 100644
--- a/Documentation/power/pci.txt
+++ b/Documentation/power/pci.txt
@@ -994,6 +994,17 @@ into D0 going forward), but if it is in runtime suspend in pci_pm_thaw_noirq(),
 the function will set the power.direct_complete flag for it (to make the PM core
 skip the subsequent "thaw" callbacks for it) and return.
 
+Setting the DPM_FLAG_LEAVE_SUSPENDED flag means that the driver prefers the
+device to be left in suspend after system-wide transitions to the working state.
+This flag is checked by the PM core, but the PCI bus type informs the PM core
+which devices may be left in suspend from its perspective (that happens during
+the "noirq" phase of system-wide suspend and analogous transitions) and next it
+uses the dev_pm_may_skip_resume() helper to decide whether or not to return from
+pci_pm_resume_noirq() early, as the PM core will skip the remaining resume
+callbacks for the device during the transition under way and will set its
+runtime PM status to "suspended" if dev_pm_may_skip_resume() returns "true" for
+it.
+
 3.2. Device Runtime Power Management
 ------------------------------------
 In addition to providing device power management callbacks PCI device drivers
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index a4c8ad98560d..c4d0a1c912f0 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -990,7 +990,7 @@ void acpi_subsys_complete(struct device *dev)
         * the sleep state it is going out of and it has never been resumed till
         * now, resume it in case the firmware powered it up.
         */
-       if (dev->power.direct_complete && pm_resume_via_firmware())
+       if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
                pm_request_resume(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_complete);
@@ -1039,10 +1039,28 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
  */
 int acpi_subsys_suspend_noirq(struct device *dev)
 {
-       if (dev_pm_smart_suspend_and_suspended(dev))
+       int ret;
+
+       if (dev_pm_smart_suspend_and_suspended(dev)) {
+               dev->power.may_skip_resume = true;
                return 0;
+       }
+
+       ret = pm_generic_suspend_noirq(dev);
+       if (ret)
+               return ret;
+
+       /*
+        * If the target system sleep state is suspend-to-idle, it is sufficient
+        * to check whether or not the device's wakeup settings are good for
+        * runtime PM. Otherwise, the pm_resume_via_firmware() check will cause
+        * acpi_subsys_complete() to take care of fixing up the device's state
+        * anyway, if need be.
+        */
+       dev->power.may_skip_resume = device_may_wakeup(dev) ||
+                                       !device_can_wakeup(dev);
 
-       return pm_generic_suspend_noirq(dev);
+       return 0;
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
 
@@ -1052,6 +1070,9 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
  */
 int acpi_subsys_resume_noirq(struct device *dev)
 {
+       if (dev_pm_may_skip_resume(dev))
+               return 0;
+
        /*
         * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
         * during system suspend, so update their runtime PM status to "active"
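A middle layer other than ACPI (or PCI, further below) would cooperate with the PM core in essentially the same way; a minimal sketch, assuming a hypothetical "foo_bus" bus type and using only existing driver-core helpers:

/* Illustrative sketch only -- "foo_bus" is a hypothetical bus type. */
#include <linux/device.h>
#include <linux/pm.h>
#include <linux/pm_wakeup.h>

static int foo_bus_suspend_noirq(struct device *dev)
{
        int ret;

        /* Device was left in runtime suspend thanks to DPM_FLAG_SMART_SUSPEND. */
        if (dev_pm_smart_suspend_and_suspended(dev)) {
                dev->power.may_skip_resume = true;
                return 0;
        }

        ret = pm_generic_suspend_noirq(dev);
        if (ret)
                return ret;

        /* Tell the PM core whether the device may stay suspended on resume. */
        dev->power.may_skip_resume = device_may_wakeup(dev) ||
                                     !device_can_wakeup(dev);
        return 0;
}

static int foo_bus_resume_noirq(struct device *dev)
{
        /* Honour the PM core's decision to leave the device in suspend. */
        if (dev_pm_may_skip_resume(dev))
                return 0;

        return pm_generic_resume_noirq(dev);
}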
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 08744b572af6..6e8cc5de93fd 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -18,7 +18,6 @@
  */
 
 #include <linux/device.h>
-#include <linux/kallsyms.h>
 #include <linux/export.h>
 #include <linux/mutex.h>
 #include <linux/pm.h>
@@ -541,6 +540,18 @@ void dev_pm_skip_next_resume_phases(struct device *dev)
 }
 
 /**
+ * dev_pm_may_skip_resume - System-wide device resume optimization check.
+ * @dev: Target device.
+ *
+ * Checks whether or not the device may be left in suspend after a system-wide
+ * transition to the working state.
+ */
+bool dev_pm_may_skip_resume(struct device *dev)
+{
+       return !dev->power.must_resume && pm_transition.event != PM_EVENT_RESTORE;
+}
+
+/**
  * device_resume_noirq - Execute a "noirq resume" callback for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
@@ -588,6 +599,18 @@ static int device_resume_noirq(struct device *dev, pm_message_t state, bool asyn
        error = dpm_run_callback(callback, dev, state, info);
        dev->power.is_noirq_suspended = false;
 
+       if (dev_pm_may_skip_resume(dev)) {
+               /*
+                * The device is going to be left in suspend, but it might not
+                * have been in runtime suspend before the system suspended, so
+                * its runtime PM status needs to be updated to avoid confusing
+                * the runtime PM framework when runtime PM is enabled for the
+                * device again.
+                */
+               pm_runtime_set_suspended(dev);
+               dev_pm_skip_next_resume_phases(dev);
+       }
+
  Out:
        complete_all(&dev->power.completion);
        TRACE_RESUME(error);
@@ -1089,6 +1112,22 @@ static pm_message_t resume_event(pm_message_t sleep_state)
        return PMSG_ON;
 }
 
+static void dpm_superior_set_must_resume(struct device *dev)
+{
+       struct device_link *link;
+       int idx;
+
+       if (dev->parent)
+               dev->parent->power.must_resume = true;
+
+       idx = device_links_read_lock();
+
+       list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
+               link->supplier->power.must_resume = true;
+
+       device_links_read_unlock(idx);
+}
+
 /**
  * __device_suspend_noirq - Execute a "noirq suspend" callback for given device.
  * @dev: Device to handle.
@@ -1140,10 +1179,28 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
        }
 
        error = dpm_run_callback(callback, dev, state, info);
-       if (!error)
-               dev->power.is_noirq_suspended = true;
-       else
+       if (error) {
                async_error = error;
+               goto Complete;
+       }
+
+       dev->power.is_noirq_suspended = true;
+
+       if (dev_pm_test_driver_flags(dev, DPM_FLAG_LEAVE_SUSPENDED)) {
+               /*
+                * The only safe strategy here is to require that if the device
+                * may not be left in suspend, resume callbacks must be invoked
+                * for it.
+                */
+               dev->power.must_resume = dev->power.must_resume ||
+                                       !dev->power.may_skip_resume ||
+                                       atomic_read(&dev->power.usage_count) > 1;
+       } else {
+               dev->power.must_resume = true;
+       }
+
+       if (dev->power.must_resume)
+               dpm_superior_set_must_resume(dev);
 
 Complete:
        complete_all(&dev->power.completion);
@@ -1435,6 +1492,22 @@ static int legacy_suspend(struct device *dev, pm_message_t state,
        return error;
 }
 
+static void dpm_propagate_to_parent(struct device *dev)
+{
+       struct device *parent = dev->parent;
+
+       if (!parent)
+               return;
+
+       spin_lock_irq(&parent->power.lock);
+
+       parent->power.direct_complete = false;
+       if (dev->power.wakeup_path && !parent->power.ignore_children)
+               parent->power.wakeup_path = true;
+
+       spin_unlock_irq(&parent->power.lock);
+}
+
 static void dpm_clear_suppliers_direct_complete(struct device *dev)
 {
        struct device_link *link;
@@ -1500,6 +1573,9 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
                dev->power.direct_complete = false;
        }
 
+       dev->power.may_skip_resume = false;
+       dev->power.must_resume = false;
+
        dpm_watchdog_set(&wd, dev);
        device_lock(dev);
 
@@ -1543,19 +1619,8 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
 
  End:
        if (!error) {
-               struct device *parent = dev->parent;
-
                dev->power.is_suspended = true;
-               if (parent) {
-                       spin_lock_irq(&parent->power.lock);
-
-                       dev->parent->power.direct_complete = false;
-                       if (dev->power.wakeup_path
-                           && !dev->parent->power.ignore_children)
-                               dev->parent->power.wakeup_path = true;
-
-                       spin_unlock_irq(&parent->power.lock);
-               }
+               dpm_propagate_to_parent(dev);
                dpm_clear_suppliers_direct_complete(dev);
        }
 
@@ -1665,8 +1730,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
        if (dev->power.syscore)
                return 0;
 
-       WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
-               !pm_runtime_enabled(dev));
+       WARN_ON(!pm_runtime_enabled(dev) &&
+               dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND |
+                                             DPM_FLAG_LEAVE_SUSPENDED));
 
        /*
         * If a device's parent goes into runtime suspend at the wrong time,
diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
index e153e28b1857..0f651efc58a1 100644
--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -108,16 +108,10 @@ static ssize_t control_show(struct device *dev, struct device_attribute *attr,
 static ssize_t control_store(struct device * dev, struct device_attribute *attr,
                             const char * buf, size_t n)
 {
-       char *cp;
-       int len = n;
-
-       cp = memchr(buf, '\n', n);
-       if (cp)
-               len = cp - buf;
        device_lock(dev);
-       if (len == sizeof ctrl_auto - 1 && strncmp(buf, ctrl_auto, len) == 0)
+       if (sysfs_streq(buf, ctrl_auto))
                pm_runtime_allow(dev);
-       else if (len == sizeof ctrl_on - 1 && strncmp(buf, ctrl_on, len) == 0)
+       else if (sysfs_streq(buf, ctrl_on))
                pm_runtime_forbid(dev);
        else
                n = -EINVAL;
@@ -125,9 +119,9 @@ static ssize_t control_store(struct device * dev, struct device_attribute *attr,
        return n;
 }
 
-static DEVICE_ATTR(control, 0644, control_show, control_store);
+static DEVICE_ATTR_RW(control);
 
-static ssize_t rtpm_active_time_show(struct device *dev,
+static ssize_t runtime_active_time_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
 {
        int ret;
@@ -138,9 +132,9 @@ static ssize_t rtpm_active_time_show(struct device *dev,
        return ret;
 }
 
-static DEVICE_ATTR(runtime_active_time, 0444, rtpm_active_time_show, NULL);
+static DEVICE_ATTR_RO(runtime_active_time);
 
-static ssize_t rtpm_suspended_time_show(struct device *dev,
+static ssize_t runtime_suspended_time_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
 {
        int ret;
@@ -152,9 +146,9 @@ static ssize_t rtpm_suspended_time_show(struct device *dev,
        return ret;
 }
 
-static DEVICE_ATTR(runtime_suspended_time, 0444, rtpm_suspended_time_show, NULL);
+static DEVICE_ATTR_RO(runtime_suspended_time);
 
-static ssize_t rtpm_status_show(struct device *dev,
+static ssize_t runtime_status_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
 {
        const char *p;
@@ -184,7 +178,7 @@ static ssize_t rtpm_status_show(struct device *dev,
        return sprintf(buf, p);
 }
 
-static DEVICE_ATTR(runtime_status, 0444, rtpm_status_show, NULL);
+static DEVICE_ATTR_RO(runtime_status);
 
 static ssize_t autosuspend_delay_ms_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
@@ -211,26 +205,25 @@ static ssize_t autosuspend_delay_ms_store(struct device *dev,
        return n;
 }
 
-static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show,
-               autosuspend_delay_ms_store);
+static DEVICE_ATTR_RW(autosuspend_delay_ms);
 
-static ssize_t pm_qos_resume_latency_show(struct device *dev,
-                                         struct device_attribute *attr,
-                                         char *buf)
+static ssize_t pm_qos_resume_latency_us_show(struct device *dev,
+                                            struct device_attribute *attr,
+                                            char *buf)
 {
        s32 value = dev_pm_qos_requested_resume_latency(dev);
 
        if (value == 0)
                return sprintf(buf, "n/a\n");
-       else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+       if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
                value = 0;
 
        return sprintf(buf, "%d\n", value);
 }
 
-static ssize_t pm_qos_resume_latency_store(struct device *dev,
-                                          struct device_attribute *attr,
-                                          const char *buf, size_t n)
+static ssize_t pm_qos_resume_latency_us_store(struct device *dev,
+                                             struct device_attribute *attr,
+                                             const char *buf, size_t n)
 {
        s32 value;
        int ret;
@@ -245,7 +238,7 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
 
                if (value == 0)
                        value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
-       } else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) {
+       } else if (sysfs_streq(buf, "n/a")) {
                value = 0;
        } else {
                return -EINVAL;
@@ -256,26 +249,25 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
        return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_resume_latency_us, 0644,
-               pm_qos_resume_latency_show, pm_qos_resume_latency_store);
+static DEVICE_ATTR_RW(pm_qos_resume_latency_us);
 
-static ssize_t pm_qos_latency_tolerance_show(struct device *dev,
-                                            struct device_attribute *attr,
-                                            char *buf)
+static ssize_t pm_qos_latency_tolerance_us_show(struct device *dev,
+                                               struct device_attribute *attr,
+                                               char *buf)
 {
        s32 value = dev_pm_qos_get_user_latency_tolerance(dev);
 
        if (value < 0)
                return sprintf(buf, "auto\n");
-       else if (value == PM_QOS_LATENCY_ANY)
+       if (value == PM_QOS_LATENCY_ANY)
                return sprintf(buf, "any\n");
 
        return sprintf(buf, "%d\n", value);
 }
 
-static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
-                                             struct device_attribute *attr,
-                                             const char *buf, size_t n)
+static ssize_t pm_qos_latency_tolerance_us_store(struct device *dev,
+                                                struct device_attribute *attr,
+                                                const char *buf, size_t n)
 {
        s32 value;
        int ret;
@@ -285,9 +277,9 @@ static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
                if (value < 0)
                        return -EINVAL;
        } else {
-               if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n"))
+               if (sysfs_streq(buf, "auto"))
                        value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
-               else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
+               else if (sysfs_streq(buf, "any"))
                        value = PM_QOS_LATENCY_ANY;
                else
                        return -EINVAL;
@@ -296,8 +288,7 @@ static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
        return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644,
-               pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store);
+static DEVICE_ATTR_RW(pm_qos_latency_tolerance_us);
 
 static ssize_t pm_qos_no_power_off_show(struct device *dev,
                                        struct device_attribute *attr,
@@ -323,49 +314,39 @@ static ssize_t pm_qos_no_power_off_store(struct device *dev,
        return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_no_power_off, 0644,
-               pm_qos_no_power_off_show, pm_qos_no_power_off_store);
+static DEVICE_ATTR_RW(pm_qos_no_power_off);
 
 #ifdef CONFIG_PM_SLEEP
 static const char _enabled[] = "enabled";
 static const char _disabled[] = "disabled";
 
-static ssize_t
-wake_show(struct device * dev, struct device_attribute *attr, char * buf)
+static ssize_t wakeup_show(struct device *dev, struct device_attribute *attr,
+                          char *buf)
 {
        return sprintf(buf, "%s\n", device_can_wakeup(dev)
                ? (device_may_wakeup(dev) ? _enabled : _disabled)
                : "");
 }
 
-static ssize_t
-wake_store(struct device * dev, struct device_attribute *attr,
-       const char * buf, size_t n)
+static ssize_t wakeup_store(struct device *dev, struct device_attribute *attr,
+                           const char *buf, size_t n)
 {
-       char *cp;
-       int len = n;
-
        if (!device_can_wakeup(dev))
                return -EINVAL;
 
-       cp = memchr(buf, '\n', n);
-       if (cp)
-               len = cp - buf;
-       if (len == sizeof _enabled - 1
-                       && strncmp(buf, _enabled, sizeof _enabled - 1) == 0)
+       if (sysfs_streq(buf, _enabled))
                device_set_wakeup_enable(dev, 1);
-       else if (len == sizeof _disabled - 1
-                       && strncmp(buf, _disabled, sizeof _disabled - 1) == 0)
+       else if (sysfs_streq(buf, _disabled))
                device_set_wakeup_enable(dev, 0);
        else
                return -EINVAL;
        return n;
 }
 
-static DEVICE_ATTR(wakeup, 0644, wake_show, wake_store);
+static DEVICE_ATTR_RW(wakeup);
 
 static ssize_t wakeup_count_show(struct device *dev,
-               struct device_attribute *attr, char *buf)
+                                struct device_attribute *attr, char *buf)
 {
        unsigned long count = 0;
        bool enabled = false;
@@ -379,10 +360,11 @@ static ssize_t wakeup_count_show(struct device *dev,
        return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_count, 0444, wakeup_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_count);
 
 static ssize_t wakeup_active_count_show(struct device *dev,
-               struct device_attribute *attr, char *buf)
+                                       struct device_attribute *attr,
+                                       char *buf)
 {
        unsigned long count = 0;
        bool enabled = false;
@@ -396,11 +378,11 @@ static ssize_t wakeup_active_count_show(struct device *dev,
        return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_active_count, 0444, wakeup_active_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_active_count);
 
 static ssize_t wakeup_abort_count_show(struct device *dev,
-                                       struct device_attribute *attr,
-                                       char *buf)
+                                      struct device_attribute *attr,
+                                      char *buf)
 {
        unsigned long count = 0;
        bool enabled = false;
@@ -414,7 +396,7 @@ static ssize_t wakeup_abort_count_show(struct device *dev,
        return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_abort_count, 0444, wakeup_abort_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_abort_count);
 
 static ssize_t wakeup_expire_count_show(struct device *dev,
                                        struct device_attribute *attr,
@@ -432,10 +414,10 @@ static ssize_t wakeup_expire_count_show(struct device *dev,
        return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_expire_count, 0444, wakeup_expire_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_expire_count);
 
 static ssize_t wakeup_active_show(struct device *dev,
-               struct device_attribute *attr, char *buf)
+                                 struct device_attribute *attr, char *buf)
 {
        unsigned int active = 0;
        bool enabled = false;
@@ -449,10 +431,11 @@ static ssize_t wakeup_active_show(struct device *dev,
        return enabled ? sprintf(buf, "%u\n", active) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_active, 0444, wakeup_active_show, NULL);
+static DEVICE_ATTR_RO(wakeup_active);
 
-static ssize_t wakeup_total_time_show(struct device *dev,
-               struct device_attribute *attr, char *buf)
+static ssize_t wakeup_total_time_ms_show(struct device *dev,
+                                        struct device_attribute *attr,
+                                        char *buf)
 {
        s64 msec = 0;
        bool enabled = false;
@@ -466,10 +449,10 @@ static ssize_t wakeup_total_time_show(struct device *dev,
        return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_total_time_ms, 0444, wakeup_total_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_total_time_ms);
 
-static ssize_t wakeup_max_time_show(struct device *dev,
-               struct device_attribute *attr, char *buf)
+static ssize_t wakeup_max_time_ms_show(struct device *dev,
+                                      struct device_attribute *attr, char *buf)
 {
        s64 msec = 0;
        bool enabled = false;
@@ -483,10 +466,11 @@ static ssize_t wakeup_max_time_show(struct device *dev,
        return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_max_time_ms, 0444, wakeup_max_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_max_time_ms);
 
-static ssize_t wakeup_last_time_show(struct device *dev,
-               struct device_attribute *attr, char *buf)
+static ssize_t wakeup_last_time_ms_show(struct device *dev,
+                                       struct device_attribute *attr,
+                                       char *buf)
 {
        s64 msec = 0;
        bool enabled = false;
@@ -500,12 +484,12 @@ static ssize_t wakeup_last_time_show(struct device *dev,
        return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_last_time_ms, 0444, wakeup_last_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_last_time_ms);
 
 #ifdef CONFIG_PM_AUTOSLEEP
-static ssize_t wakeup_prevent_sleep_time_show(struct device *dev,
-                                             struct device_attribute *attr,
-                                             char *buf)
+static ssize_t wakeup_prevent_sleep_time_ms_show(struct device *dev,
+                                                struct device_attribute *attr,
+                                                char *buf)
 {
        s64 msec = 0;
        bool enabled = false;
@@ -519,40 +503,39 @@ static ssize_t wakeup_prevent_sleep_time_show(struct device *dev,
        return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_prevent_sleep_time_ms, 0444,
-               wakeup_prevent_sleep_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_prevent_sleep_time_ms);
 #endif /* CONFIG_PM_AUTOSLEEP */
 #endif /* CONFIG_PM_SLEEP */
 
 #ifdef CONFIG_PM_ADVANCED_DEBUG
-static ssize_t rtpm_usagecount_show(struct device *dev,
-                                   struct device_attribute *attr, char *buf)
+static ssize_t runtime_usage_show(struct device *dev,
+                                 struct device_attribute *attr, char *buf)
 {
        return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count));
 }
+static DEVICE_ATTR_RO(runtime_usage);
 
-static ssize_t rtpm_children_show(struct device *dev,
-                                 struct device_attribute *attr, char *buf)
+static ssize_t runtime_active_kids_show(struct device *dev,
+                                       struct device_attribute *attr,
+                                       char *buf)
 {
        return sprintf(buf, "%d\n", dev->power.ignore_children ?
                0 : atomic_read(&dev->power.child_count));
 }
+static DEVICE_ATTR_RO(runtime_active_kids);
 
-static ssize_t rtpm_enabled_show(struct device *dev,
-                                struct device_attribute *attr, char *buf)
+static ssize_t runtime_enabled_show(struct device *dev,
+                                   struct device_attribute *attr, char *buf)
 {
-       if ((dev->power.disable_depth) && (dev->power.runtime_auto == false))
+       if (dev->power.disable_depth && (dev->power.runtime_auto == false))
                return sprintf(buf, "disabled & forbidden\n");
-       else if (dev->power.disable_depth)
+       if (dev->power.disable_depth)
                return sprintf(buf, "disabled\n");
-       else if (dev->power.runtime_auto == false)
+       if (dev->power.runtime_auto == false)
                return sprintf(buf, "forbidden\n");
        return sprintf(buf, "enabled\n");
 }
-
-static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
-static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
-static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);
+static DEVICE_ATTR_RO(runtime_enabled);
 
 #ifdef CONFIG_PM_SLEEP
 static ssize_t async_show(struct device *dev, struct device_attribute *attr,
@@ -566,23 +549,16 @@ static ssize_t async_show(struct device *dev, struct device_attribute *attr,
 static ssize_t async_store(struct device *dev, struct device_attribute *attr,
                           const char *buf, size_t n)
 {
-       char *cp;
-       int len = n;
-
-       cp = memchr(buf, '\n', n);
-       if (cp)
-               len = cp - buf;
-       if (len == sizeof _enabled - 1 && strncmp(buf, _enabled, len) == 0)
+       if (sysfs_streq(buf, _enabled))
                device_enable_async_suspend(dev);
-       else if (len == sizeof _disabled - 1 &&
-                strncmp(buf, _disabled, len) == 0)
+       else if (sysfs_streq(buf, _disabled))
                device_disable_async_suspend(dev);
        else
                return -EINVAL;
        return n;
 }
 
-static DEVICE_ATTR(async, 0644, async_show, async_store);
+static DEVICE_ATTR_RW(async);
 
 #endif /* CONFIG_PM_SLEEP */
 #endif /* CONFIG_PM_ADVANCED_DEBUG */
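The sysfs.c conversion above is mechanical: DEVICE_ATTR_RO()/DEVICE_ATTR_RW() require their callbacks to be named <attribute>_show()/<attribute>_store(), hence the renames, and sysfs_streq() already ignores a trailing newline, hence the dropped memchr()/strncmp() boilerplate. A condensed, stand-alone illustration of the resulting pattern (the "foo" attribute and its backing variable are made up):

/* Illustrative sketch only -- "foo" is a hypothetical sysfs attribute. */
#include <linux/device.h>
#include <linux/string.h>

static bool foo_active;

static ssize_t foo_show(struct device *dev, struct device_attribute *attr,
                        char *buf)
{
        return sprintf(buf, "%s\n", foo_active ? "enabled" : "disabled");
}

static ssize_t foo_store(struct device *dev, struct device_attribute *attr,
                         const char *buf, size_t n)
{
        /* sysfs_streq() matches the string with or without a trailing '\n'. */
        if (sysfs_streq(buf, "enabled"))
                foo_active = true;
        else if (sysfs_streq(buf, "disabled"))
                foo_active = false;
        else
                return -EINVAL;

        return n;
}
static DEVICE_ATTR_RW(foo);     /* expands to dev_attr_foo with mode 0644 */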
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 945099d49f8f..9e53e51b91f3 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -699,7 +699,7 @@ static void pci_pm_complete(struct device *dev)
        pm_generic_complete(dev);
 
        /* Resume device if platform firmware has put it in reset-power-on */
-       if (dev->power.direct_complete && pm_resume_via_firmware()) {
+       if (pm_runtime_suspended(dev) && pm_resume_via_firmware()) {
                pci_power_t pre_sleep_state = pci_dev->current_state;
 
                pci_update_current_state(pci_dev, pci_dev->current_state);
@@ -783,8 +783,10 @@ static int pci_pm_suspend_noirq(struct device *dev)
        struct pci_dev *pci_dev = to_pci_dev(dev);
        const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 
-       if (dev_pm_smart_suspend_and_suspended(dev))
+       if (dev_pm_smart_suspend_and_suspended(dev)) {
+               dev->power.may_skip_resume = true;
                return 0;
+       }
 
        if (pci_has_legacy_pm_support(pci_dev))
                return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
@@ -838,6 +840,16 @@ static int pci_pm_suspend_noirq(struct device *dev)
 Fixup:
        pci_fixup_device(pci_fixup_suspend_late, pci_dev);
 
+       /*
+        * If the target system sleep state is suspend-to-idle, it is sufficient
+        * to check whether or not the device's wakeup settings are good for
+        * runtime PM. Otherwise, the pm_resume_via_firmware() check will cause
+        * pci_pm_complete() to take care of fixing up the device's state
+        * anyway, if need be.
+        */
+       dev->power.may_skip_resume = device_may_wakeup(dev) ||
+                                       !device_can_wakeup(dev);
+
        return 0;
 }
 
@@ -847,6 +859,9 @@ static int pci_pm_resume_noirq(struct device *dev)
        struct device_driver *drv = dev->driver;
        int error = 0;
 
+       if (dev_pm_may_skip_resume(dev))
+               return 0;
+
        /*
         * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
         * during system suspend, so update their runtime PM status to "active"
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 492ed473ba7e..e723b78d8357 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -556,9 +556,10 @@ struct pm_subsys_data {
  * These flags can be set by device drivers at the probe time. They need not be
  * cleared by the drivers as the driver core will take care of that.
  *
- * NEVER_SKIP: Do not skip system suspend/resume callbacks for the device.
+ * NEVER_SKIP: Do not skip all system suspend/resume callbacks for the device.
  * SMART_PREPARE: Check the return value of the driver's ->prepare callback.
  * SMART_SUSPEND: No need to resume the device from runtime suspend.
+ * LEAVE_SUSPENDED: Avoid resuming the device during system resume if possible.
  *
  * Setting SMART_PREPARE instructs bus types and PM domains which may want
  * system suspend/resume callbacks to be skipped for the device to return 0 from
@@ -572,10 +573,14 @@ struct pm_subsys_data {
  * necessary from the driver's perspective. It also may cause them to skip
  * invocations of the ->suspend_late and ->suspend_noirq callbacks provided by
  * the driver if they decide to leave the device in runtime suspend.
+ *
+ * Setting LEAVE_SUSPENDED informs the PM core and middle-layer code that the
+ * driver prefers the device to be left in suspend after system resume.
  */
-#define DPM_FLAG_NEVER_SKIP            BIT(0)
-#define DPM_FLAG_SMART_PREPARE         BIT(1)
-#define DPM_FLAG_SMART_SUSPEND         BIT(2)
+#define DPM_FLAG_NEVER_SKIP            BIT(0)
+#define DPM_FLAG_SMART_PREPARE         BIT(1)
+#define DPM_FLAG_SMART_SUSPEND         BIT(2)
+#define DPM_FLAG_LEAVE_SUSPENDED       BIT(3)
 
 struct dev_pm_info {
        pm_message_t            power_state;
@@ -597,6 +602,8 @@ struct dev_pm_info {
        bool                    wakeup_path:1;
        bool                    syscore:1;
        bool                    no_pm_callbacks:1;      /* Owned by the PM core */
+       unsigned int            must_resume:1;          /* Owned by the PM core */
+       unsigned int            may_skip_resume:1;      /* Set by subsystems */
 #else
        unsigned int            should_wakeup:1;
 #endif
@@ -766,6 +773,7 @@ extern int pm_generic_poweroff(struct device *dev);
 extern void pm_generic_complete(struct device *dev);
 
 extern void dev_pm_skip_next_resume_phases(struct device *dev);
+extern bool dev_pm_may_skip_resume(struct device *dev);
 extern bool dev_pm_smart_suspend_and_suspended(struct device *dev);
 
 #else /* !CONFIG_PM_SLEEP */
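The new flag, like the existing ones, is just a bit in dev->power.driver_flags, so subsystem code can query it with the dev_pm_test_driver_flags() helper from the same header; a trivial, made-up debugging helper as a sketch:

/* Illustrative sketch only -- not part of this merge. */
#include <linux/device.h>
#include <linux/pm.h>

static void foo_report_pm_driver_flags(struct device *dev)
{
        /* dev_pm_test_driver_flags() returns true if any of the given bits is set. */
        dev_dbg(dev, "never_skip=%d smart_prepare=%d smart_suspend=%d leave_suspended=%d\n",
                dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP),
                dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE),
                dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND),
                dev_pm_test_driver_flags(dev, DPM_FLAG_LEAVE_SUSPENDED));
}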