author     Andrea Righi <andrea.righi@linux.dev>        2024-09-21 21:39:21 +0200
committer  Tejun Heo <tj@kernel.org>                    2024-09-23 06:53:02 -1000
commit     431844b65f4c1b988ccd886f2ed29c138f7bb262
tree       3d84bd26e2e6c1be937bc39c65a2fbbf947abb1b /kernel/sched
parent     62d3726d4cd66f3e48dfe0f0401e0d74e58c2170
sched_ext: Provide a sysfs enable_seq counter
As discussed during the distro-centric session of the sched_ext
Microconference at LPC 2024, introduce a sequence counter that is
incremented every time a BPF scheduler is loaded.
This feature can help distributions diagnose potential performance
regressions by identifying systems where users are running (or have run)
custom BPF schedulers.
Example:
arighi@virtme-ng~> cat /sys/kernel/sched_ext/enable_seq
0
arighi@virtme-ng~> sudo scx_simple
local=1 global=0
^CEXIT: unregistered from user space
arighi@virtme-ng~> cat /sys/kernel/sched_ext/enable_seq
1
In this way, user-space tools (such as Ubuntu's apport and similar) can
gather this information and include it in bug reports.
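
For example, a bug-reporting tool can read the counter with plain file
I/O. A minimal user-space sketch (the read_scx_enable_seq() helper is
illustrative; only the sysfs path comes from this patch):

#include <stdio.h>

/* Read /sys/kernel/sched_ext/enable_seq; returns -1 if the file is
 * unavailable (e.g. a kernel built without sched_ext) or unreadable. */
static long read_scx_enable_seq(void)
{
	FILE *f = fopen("/sys/kernel/sched_ext/enable_seq", "r");
	long seq = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &seq) != 1)
		seq = -1;
	fclose(f);
	return seq;
}

int main(void)
{
	long seq = read_scx_enable_seq();

	if (seq < 0)
		printf("sched_ext: enable_seq not available\n");
	else
		printf("BPF schedulers enabled %ld time(s) since boot\n", seq);
	return 0;
}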
Cc: Giovanni Gherdovich <giovanni.gherdovich@suse.com>
Cc: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Cc: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Cc: Phil Auld <pauld@redhat.com>
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/ext.c  17
1 file changed, 17 insertions, 0 deletions
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 7c320dcd72d5..c09e3dc38c34 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -875,6 +875,13 @@ static atomic_long_t scx_nr_rejected = ATOMIC_LONG_INIT(0);
 static atomic_long_t scx_hotplug_seq = ATOMIC_LONG_INIT(0);
 
 /*
+ * A monotonically increasing sequence number that is incremented every time a
+ * scheduler is enabled. This can be used to check if any custom sched_ext
+ * scheduler has ever been used in the system.
+ */
+static atomic_long_t scx_enable_seq = ATOMIC_LONG_INIT(0);
+
+/*
  * The maximum amount of time in jiffies that a task may be runnable without
  * being scheduled on a CPU. If this timeout is exceeded, it will trigger
  * scx_ops_error().
@@ -4154,11 +4161,19 @@ static ssize_t scx_attr_hotplug_seq_show(struct kobject *kobj,
 }
 SCX_ATTR(hotplug_seq);
 
+static ssize_t scx_attr_enable_seq_show(struct kobject *kobj,
+					struct kobj_attribute *ka, char *buf)
+{
+	return sysfs_emit(buf, "%ld\n", atomic_long_read(&scx_enable_seq));
+}
+SCX_ATTR(enable_seq);
+
 static struct attribute *scx_global_attrs[] = {
 	&scx_attr_state.attr,
 	&scx_attr_switch_all.attr,
 	&scx_attr_nr_rejected.attr,
 	&scx_attr_hotplug_seq.attr,
+	&scx_attr_enable_seq.attr,
 	NULL,
 };
 
@@ -5177,6 +5192,8 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
 	kobject_uevent(scx_root_kobj, KOBJ_ADD);
 	mutex_unlock(&scx_ops_enable_mutex);
 
+	atomic_long_inc(&scx_enable_seq);
+
 	return 0;
 
 err_del:
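
For context, SCX_ATTR() is the helper macro already used by the other
sched_ext sysfs attributes (state, switch_all, nr_rejected,
hotplug_seq). Reproduced here as a sketch from the surrounding ext.c
code, not part of this diff, it expands to roughly:

#define SCX_ATTR(_name)							\
	static struct kobj_attribute scx_attr_##_name = {		\
		.attr = { .name = __stringify(_name), .mode = 0444 },	\
		.show = scx_attr_##_name##_show,			\
	}

So SCX_ATTR(enable_seq) defines a read-only (0444) attribute backed by
scx_attr_enable_seq_show(), and adding it to scx_global_attrs[] is all
that is needed for /sys/kernel/sched_ext/enable_seq to appear. Note that
atomic_long_inc() runs only on the success path of scx_ops_enable(),
after scx_ops_enable_mutex is released, so failed loads do not bump the
counter and the lockless atomic_long_read() in the show callback needs
no additional serialization.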