2024-11-11  libbpf: Introduce errstr() for stringifying errno  (Mykyta Yatsenko)

Add a function errstr(int err) that allows converting numeric error codes into string representations.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241111212919.368971-2-mykyta.yatsenko5@gmail.com
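A minimal sketch of what such a helper can look like, assuming the common libbpf convention of returning a symbolic errno name; the handled codes and the fallback string are illustrative, not the actual implementation:

  /* Sketch only: map a (possibly negative) error code to a symbolic name. */
  #include <errno.h>

  static const char *errstr(int err)
  {
          switch (err < 0 ? -err : err) {
          case ENOENT:
                  return "-ENOENT";
          case EINVAL:
                  return "-EINVAL";
          case ENOMEM:
                  return "-ENOMEM";
          case EOPNOTSUPP:
                  return "-EOPNOTSUPP";
          default:
                  return "-EUNKNOWN";
          }
  }

Call sites can then report failures uniformly, e.g. fprintf(stderr, "failed to attach: %s\n", errstr(err)), regardless of whether the error arrived via errno or as a negative return value.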
2024-11-11  bpf: Replace the document for PTR_TO_BTF_ID_OR_NULL  (Menglong Dong)

Commit c25b2ae13603 ("bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX | PTR_MAYBE_NULL") moved the fields around and misplaced the documentation for "PTR_TO_BTF_ID_OR_NULL". So, let's move it back to its proper place.

Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241111124911.1436911-1-dongml2@chinatelecom.cn
2024-11-11  tools/bpf: Fix the wrong format specifier in bpf_jit_disasm  (Luo Yifan)

A static checker warns that a %d in a format string is mismatched with the corresponding argument type, which could result in incorrectly printed data. This patch fixes it.

Signed-off-by: Luo Yifan <luoyifan@cmss.chinamobile.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241111021004.272293-1-luoyifan@cmss.chinamobile.com
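For illustration, this is the class of bug such a warning points at (a generic example, not the actual bpf_jit_disasm code):

  #include <stdio.h>

  int main(void)
  {
          char image[64];
          size_t len = sizeof(image);

          printf("%d bytes\n", len);  /* mismatch: %d expects int, len is size_t */
          printf("%zu bytes\n", len); /* correct specifier for size_t */
          return 0;
  }

Passing a size_t where %d expects an int is undefined behavior and, on LP64 systems in particular, can print garbage — exactly the "incorrectly printed data" the checker warns about.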
2024-11-11  kbuild,bpf: Pass make jobs' value to pahole  (Florian Schmaus)

Pass the value of make's -j/--jobs argument to pahole, to avoid out-of-memory errors and make pahole respect the "jobs" value of make.

On systems with little memory but many cores, invoking pahole using -j without an argument potentially creates too many pahole instances, causing an out-of-memory situation. Instead, we should pass make's "jobs" value as an argument to pahole's -j, which is likely configured to be (much) lower than the actual core count on such systems.

If make was invoked without -j, either via the cmdline or MAKEFLAGS, then JOBS will simply be empty, resulting in the existing behavior, as expected.

Signed-off-by: Florian Schmaus <flo@geekplace.eu>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Link: https://lore.kernel.org/bpf/20241102100452.793970-1-flo@geekplace.eu
2024-11-11  Merge branch 'refactor-lock-management'  (Alexei Starovoitov)

Kumar Kartikeya Dwivedi says:
====================
Refactor lock management

This set refactors lock management in the verifier in preparation for spin locks that can be acquired multiple times. In addition, the special-case reference leak logic for callbacks, which is no longer necessary, is also dropped. See patches for details.

Changelog:
----------
v5 -> v6
v5: https://lore.kernel.org/bpf/20241109225243.2306756-1-memxor@gmail.com
  * Move active_locks mutation to {acquire,release}_lock_state (Alexei)

v4 -> v5
v4: https://lore.kernel.org/bpf/20241109074347.1434011-1-memxor@gmail.com
  * Make active_locks part of bpf_func_state (Alexei)
  * Remove unneeded in_callback_fn logic for references

v3 -> v4
v3: https://lore.kernel.org/bpf/20241104151716.2079893-1-memxor@gmail.com
  * Address comments from Alexei
    * Drop struct bpf_active_lock definition
    * Name enum type, expand definition to multiple lines
    * s/REF_TYPE_BPF_LOCK/REF_TYPE_LOCK/g
    * Change active_lock type to int
    * Fix type of 'type' in acquire_lock_state
    * Filter by taking type explicitly in find_lock_state
    * WARN for default case in refsafe switch statement

v2 -> v3
v2: https://lore.kernel.org/bpf/20241103212252.547071-1-memxor@gmail.com
  * Rebase on bpf-next to resolve merge conflict

v1 -> v2
v1: https://lore.kernel.org/bpf/20241103205856.345580-1-memxor@gmail.com
  * Fix refsafe state comparison to check callback_ref and ptr separately.
====================

Link: https://lore.kernel.org/r/20241109231430.2475236-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-11-11  bpf: Drop special callback reference handling  (Kumar Kartikeya Dwivedi)

Logic to prevent callbacks from acquiring new references for the program (i.e. leaving acquired references), and from releasing caller references (i.e. those acquired in parent frames), was introduced in commit 9d9d00ac29d0 ("bpf: Fix reference state management for synchronous callbacks"). This was necessary because, back then, the verifier simulated each callback just once, even though it could potentially be executed N times at runtime (where N can be zero). This meant that callbacks which left lingering resources or cleared caller resources could do so more than once, operating on undefined state or leaking memory.

With the fixes to callback verification in commit ab5cfac139ab ("bpf: verify callbacks as if they are called unknown number of times"), all of this extra logic is no longer necessary. Hence, drop it as part of this commit.

Cc: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241109231430.2475236-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-11-11  bpf: Refactor active lock management  (Kumar Kartikeya Dwivedi)

When bpf_spin_lock was introduced originally, there was deliberation on whether to use an array of lock IDs, but since bpf_spin_lock is limited to holding a single lock at any given time, we've been using a single ID to identify the held lock.

In preparation for introducing spin locks that can be taken multiple times, introduce support for acquiring multiple lock IDs. For this purpose, reuse the acquired_refs array and store both lock and pointer references. We tag the entry with REF_TYPE_PTR or REF_TYPE_LOCK to disambiguate and find the relevant entry. The ptr field is used to track the map_ptr or btf (for bpf_obj_new allocations) to ensure locks can be matched with protected fields within the same "allocation", i.e. bpf_obj_new object or map value.

The struct active_lock is changed to an int as the state is part of the acquired_refs array, and we only need active_lock as a cheap way of detecting lock presence.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241109231430.2475236-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
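A hedged sketch of the tagged entry described above; the enum and field names follow the commit text, but the exact layout in the verifier state is an assumption:

  /* Sketch: one entry of the acquired_refs array after this refactor. */
  enum ref_state_type {
          REF_TYPE_PTR,   /* an acquired pointer reference */
          REF_TYPE_LOCK,  /* a held bpf_spin_lock */
  };

  struct bpf_reference_state {
          enum ref_state_type type;
          int id;    /* unique ID identifying this reference/lock */
          void *ptr; /* map_ptr or btf, i.e. the owning "allocation" */
  };

find_lock_state() can then filter entries by type and match on (id, ptr) to locate the held lock belonging to a given allocation.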
2024-11-11  selftests/bpf: skip the timer_lockup test for single-CPU nodes  (Viktor Malik)

The timer_lockup test needs 2 CPUs to work; on single-CPU nodes it fails to set thread affinity to CPU 1 since it doesn't exist:

  # ./test_progs -t timer_lockup
  test_timer_lockup:PASS:timer_lockup__open_and_load 0 nsec
  test_timer_lockup:PASS:pthread_create thread1 0 nsec
  test_timer_lockup:PASS:pthread_create thread2 0 nsec
  timer_lockup_thread:PASS:cpu affinity 0 nsec
  timer_lockup_thread:FAIL:cpu affinity unexpected error: 22 (errno 0)
  test_timer_lockup:PASS: 0 nsec
  #406 timer_lockup:FAIL

Skip the test if only 1 CPU is available.

Signed-off-by: Viktor Malik <vmalik@redhat.com>
Fixes: 50bd5a0c658d1 ("selftests/bpf: Add timer lockup selftest")
Tested-by: Philo Lu <lulie@linux.alibaba.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241107115231.75200-1-vmalik@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
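A minimal sketch of such a guard, assuming libbpf_num_possible_cpus() is the helper used for the check (the actual patch may count CPUs differently):

  /* sketch: skip the subtest on single-CPU systems */
  int nr_cpus = libbpf_num_possible_cpus();

  if (!ASSERT_GT(nr_cpus, 0, "libbpf_num_possible_cpus"))
          return;
  if (nr_cpus < 2) {
          test__skip();
          return;
  }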
2024-11-11  Merge branch 'fix-lockdep-warning-for-htab-of-map'  (Alexei Starovoitov)

Hou Tao says:
====================
The patch set fixes a lockdep warning for htab of map. The warning is found when running test_maps. The warning occurs when htab_put_fd_value() attempts to acquire map_idr_lock to free the map id of the inner map while already holding the bucket lock (raw_spinlock_t).

The fix moves the invocation of free_htab_elem() after htab_unlock_bucket() and adds a test case to verify the solution.
====================

Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-11-11  selftests/bpf: Test the update operations for htab of maps  (Hou Tao)

Add test cases to verify that the following four update operations on htab of maps don't trigger a lockdep warning:

(1) add then delete
(2) add, overwrite, then delete
(3) add, then lookup_and_delete
(4) add two elements, then lookup_and_delete_batch

Test cases are added for pre-allocated and non-preallocated htab of maps respectively.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20241106063542.357743-4-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-11-11  selftests/bpf: Move ENOTSUPP from bpf_util.h  (Hou Tao)

Move the definition of ENOTSUPP into bpf_util.h to remove the duplicated definitions in multiple files.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20241106063542.357743-3-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
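The definition itself is tiny; ENOTSUPP is kernel-internal (not exported to userspace via <errno.h>) and has the value 524 in the kernel tree, which is why the tests have to carry it themselves:

  /* shared definition, e.g. in bpf_util.h */
  #ifndef ENOTSUPP
  #define ENOTSUPP 524 /* Operation is not supported */
  #endif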
2024-11-11  bpf: Call free_htab_elem() after htab_unlock_bucket()  (Hou Tao)

For htab of maps, when the map is removed from the htab, it may hold the last reference of the map. bpf_map_fd_put_ptr() will invoke bpf_map_free_id() to free the id of the removed map element. However, bpf_map_fd_put_ptr() is invoked while holding a bucket lock (raw_spinlock_t), and bpf_map_free_id() attempts to acquire map_idr_lock (spinlock_t), triggering the following lockdep warning:

  =============================
  [ BUG: Invalid wait context ]
  6.11.0-rc4+ #49 Not tainted
  -----------------------------
  test_maps/4881 is trying to lock:
  ffffffff84884578 (map_idr_lock){+...}-{3:3}, at: bpf_map_free_id.part.0+0x21/0x70
  other info that might help us debug this:
  context-{5:5}
  2 locks held by test_maps/4881:
  #0: ffffffff846caf60 (rcu_read_lock){....}-{1:3}, at: bpf_fd_htab_map_update_elem+0xf9/0x270
  #1: ffff888149ced148 (&htab->lockdep_key#2){....}-{2:2}, at: htab_map_update_elem+0x178/0xa80
  stack backtrace:
  CPU: 0 UID: 0 PID: 4881 Comm: test_maps Not tainted 6.11.0-rc4+ #49
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), ...
  Call Trace:
  <TASK>
  dump_stack_lvl+0x6e/0xb0
  dump_stack+0x10/0x20
  __lock_acquire+0x73e/0x36c0
  lock_acquire+0x182/0x450
  _raw_spin_lock_irqsave+0x43/0x70
  bpf_map_free_id.part.0+0x21/0x70
  bpf_map_put+0xcf/0x110
  bpf_map_fd_put_ptr+0x9a/0xb0
  free_htab_elem+0x69/0xe0
  htab_map_update_elem+0x50f/0xa80
  bpf_fd_htab_map_update_elem+0x131/0x270
  bpf_map_update_value+0x266/0x380
  __sys_bpf+0x21bb/0x36b0
  __x64_sys_bpf+0x45/0x60
  x64_sys_call+0x1b2a/0x20d0
  do_syscall_64+0x5d/0x100
  entry_SYSCALL_64_after_hwframe+0x76/0x7e

One way to fix the lockdep warning is to use raw_spinlock_t for map_idr_lock as well. However, bpf_map_alloc_id() invokes idr_alloc_cyclic() after acquiring map_idr_lock, which would trigger a similar lockdep warning because the slab's lock (s->cpu_slab->lock) is still a spinlock.

Instead of changing map_idr_lock's type, fix the issue by invoking htab_put_fd_value() after htab_unlock_bucket(). However, deferring the invocation of htab_put_fd_value() alone is not enough, because the old map pointers in an htab of maps can not be saved during batched deletion. Therefore, also defer the invocation of free_htab_elem(), so these to-be-freed elements can be linked together, similar to the lru map.

There are four callers of ->map_fd_put_ptr:

(1) alloc_htab_elem() (through htab_put_fd_value())
It invokes ->map_fd_put_ptr() under a raw_spinlock_t. The invocation of htab_put_fd_value() can not simply be moved after htab_unlock_bucket(), because the old element has already been stashed in htab->extra_elems. It may be reused immediately after htab_unlock_bucket(), and invoking htab_put_fd_value() after htab_unlock_bucket() may release the newly-added element incorrectly. Therefore, save the map pointer of the old element for htab of maps before unlocking the bucket, and release the map_ptr after unlock. Besides the map pointer in the old element, do the same for its special fields as well.

(2) free_htab_elem() (through htab_put_fd_value())
Its callers include __htab_map_lookup_and_delete_elem(), htab_map_delete_elem() and __htab_map_lookup_and_delete_batch(). For htab_map_delete_elem(), simply invoke free_htab_elem() after htab_unlock_bucket(). For __htab_map_lookup_and_delete_batch(), just like the lru map, link the to-be-freed elements into a node_to_free list and invoke free_htab_elem() for these elements after unlock. It is safe to reuse batch_flink as the link for node_to_free, because these elements have been removed from the hash llist. Because htab of maps doesn't support the lookup_and_delete operation, __htab_map_lookup_and_delete_elem() doesn't have the problem, so it is kept as is.

(3) fd_htab_map_free()
It invokes ->map_fd_put_ptr() without holding a raw_spinlock_t.

(4) bpf_fd_htab_map_update_elem()
It invokes ->map_fd_put_ptr() without holding a raw_spinlock_t.

After moving free_htab_elem() outside the htab bucket lock scope, use pcpu_freelist_push() instead of __pcpu_freelist_push() to disable irqs before freeing elements, and protect the invocations of bpf_mem_cache_free() with a migrate_{disable|enable} pair.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20241106063542.357743-2-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
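The shape of the htab_map_delete_elem() part of the fix, as a hedged sketch (helper names follow kernel/bpf/hashtab.c, but the code here is simplified):

  /* Sketch: unlink under the bucket lock, free after unlock. */
  static long htab_map_delete_elem_sketch(struct bpf_htab *htab, struct bucket *b,
                                          u32 hash, void *key, u32 key_size)
  {
          struct htab_elem *l;
          unsigned long flags;
          long ret;

          ret = htab_lock_bucket(htab, b, hash, &flags);
          if (ret)
                  return ret;

          l = lookup_elem_raw(&b->head, hash, key, key_size);
          if (l)
                  hlist_nulls_del_rcu(&l->hash_node); /* unlink only */
          else
                  ret = -ENOENT;

          htab_unlock_bucket(htab, b, hash, flags);

          if (l)
                  free_htab_elem(htab, l); /* may take map_idr_lock safely now */
          return ret;
  }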
2024-11-11  Merge branch 'bpf-add-uprobe-session-support'  (Andrii Nakryiko)

Jiri Olsa says:
====================
bpf: Add uprobe session support

hi,
this patchset adds support for session uprobe attachment and for using it through bpf link for bpf programs.

A session means that the uprobe consumer is executed on entry and return of the probed function, with additional control:
- the entry callback can control execution of the return callback
- entry and return callbacks can share data/cookie

Uprobe changes (on top of perf/core [1]) are posted in [2]. This patchset is based on bpf-next/master and will be merged once we pull [2] into bpf-next/master.

v9 changes:
- rebased on bpf-next/master with perf/core tag merged (thanks Peter!)

thanks,
jirka

[1] git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git perf/core
[2] https://lore.kernel.org/bpf/20241018202252.693462-1-jolsa@kernel.org/T/#ma43c549c4bf684ca1b17fa638aa5e7cbb46893e9
====================

Link: https://lore.kernel.org/r/20241108134544.480660-1-jolsa@kernel.org
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-11-11  selftests/bpf: Add threads to consumer test  (Jiri Olsa)

With the recent uprobe fix [1], the sync time after unregistering a uprobe is much longer and prolongs the consumer test, which creates and destroys hundreds of uprobes. This change adds 16 threads (which fits the test logic) and speeds up the test.

Before the change:

  # perf stat --null ./test_progs -t uprobe_multi_test/consumers
  #421/9   uprobe_multi_test/consumers:OK
  #421     uprobe_multi_test:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

  Performance counter stats for './test_progs -t uprobe_multi_test/consumers':

    28.818778973 seconds time elapsed
     0.745518000 seconds user
     0.919186000 seconds sys

After the change:

  # perf stat --null ./test_progs -t uprobe_multi_test/consumers 2>&1
  #421/9   uprobe_multi_test/consumers:OK
  #421     uprobe_multi_test:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

  Performance counter stats for './test_progs -t uprobe_multi_test/consumers':

    3.504790814 seconds time elapsed
    0.012141000 seconds user
    0.751760000 seconds sys

[1] commit 87195a1ee332 ("uprobes: switch to RCU Tasks Trace flavor for better performance")

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-14-jolsa@kernel.org
2024-11-11  selftests/bpf: Add uprobe sessions to consumer test  (Jiri Olsa)

Add uprobe session consumers to the consumer test, so we get sessions into the test mix. In addition, scale the test down to just 1 uprobe and 1 uretprobe; otherwise the test time grows and becomes unsuitable for CI even with threads.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-13-jolsa@kernel.org
2024-11-11  selftests/bpf: Add uprobe session single consumer test  (Jiri Olsa)

Test that the session ret_handler bypass works on a single uprobe with multiple consumers, each with a different session ignore-return value.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-12-jolsa@kernel.org
2024-11-11  selftests/bpf: Add kprobe session verifier test for return value  (Jiri Olsa)

Make sure a kprobe.session program can return only the values 0 and 1.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-11-jolsa@kernel.org
2024-11-11  selftests/bpf: Add uprobe session verifier test for return value  (Jiri Olsa)

Make sure a uprobe.session program can return only the values 0 and 1.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-10-jolsa@kernel.org
2024-11-11  selftests/bpf: Add uprobe session recursive test  (Jiri Olsa)

Add a uprobe session test that verifies the cookie value is stored properly when a single uprobe-ed function is executed recursively.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-9-jolsa@kernel.org
2024-11-11  selftests/bpf: Add uprobe session cookie test  (Jiri Olsa)

Add a uprobe session test that verifies the cookie value gets properly propagated from the entry to the return program.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-8-jolsa@kernel.org
2024-11-11  selftests/bpf: Add uprobe session test  (Jiri Olsa)

Add a uprobe session test, testing that the entry program's return value controls execution of the return probe program.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-7-jolsa@kernel.org
2024-11-11  libbpf: Add support for uprobe multi session attach  (Jiri Olsa)

Add support for attaching a program in uprobe session mode with the bpf_program__attach_uprobe_multi() function.

Add a session bool to struct bpf_uprobe_multi_opts that makes the attachment create a uprobe multi session, i.e. the BPF program is loaded and attached to both entry and return probes.

Also add a new program loader section that allows:

  SEC("uprobe.session/bpf_fentry_test*")

and loads/attaches the uprobe program as a uprobe session. Add a sleepable hook (uprobe.session.s) as well.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-6-jolsa@kernel.org
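A sketch of the user-space side, assuming the opts-based API described above (the binary path and function pattern are illustrative):

  #include <errno.h>
  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* Sketch: attach prog as a uprobe session to all "func*" symbols. */
  static struct bpf_link *attach_session(struct bpf_program *prog)
  {
          LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
                  .session = true,
          );
          struct bpf_link *link;

          link = bpf_program__attach_uprobe_multi(prog, -1 /* any pid */,
                                                  "/usr/bin/app", "func*", &opts);
          if (!link)
                  fprintf(stderr, "attach failed: %d\n", -errno);
          return link;
  }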
2024-11-11  bpf: Add support for uprobe multi session context  (Jiri Olsa)

Place a bpf_session_run_ctx layer in between bpf_run_ctx and bpf_uprobe_multi_run_ctx, so the session data can be retrieved from the uprobe_multi link. Also grant the session kfuncs access to uprobe session programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-5-jolsa@kernel.org
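A hedged sketch of the layering; bpf_session_run_ctx already exists for kprobe sessions, but the exact field layout shown here is an assumption:

  /* Sketch: session layer shared between kprobe and uprobe sessions. */
  struct bpf_session_run_ctx {
          struct bpf_run_ctx run_ctx;
          bool is_return; /* entry or return invocation */
          void *data;     /* per-session cookie storage */
  };

  struct bpf_uprobe_multi_run_ctx {
          struct bpf_session_run_ctx session_ctx; /* was: struct bpf_run_ctx */
          unsigned long entry_ip;
          struct bpf_uprobe *uprobe;
  };

With this, session kfuncs can container_of() from the bpf_run_ctx they receive to the session data, regardless of whether the caller is a kprobe or uprobe session program.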
2024-11-11  bpf: Add support for uprobe multi session attach  (Jiri Olsa)

Add support for attaching a BPF program to both the entry and return probe of the same function. This is a common use case which currently requires creating two uprobe multi links.

Add a new BPF_TRACE_UPROBE_SESSION attach type that instructs the kernel to attach a single link program to both the entry and exit probe. Execution of the BPF program on the return probe is controlled by the entry program's return value: returning zero executes the return probe program, returning non-zero skips it.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-4-jolsa@kernel.org
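On the BPF side, a session program can look like the hedged sketch below; the bare section name assumes the attachment is done programmatically (see the libbpf entry above), and bpf_session_is_return() is the session kfunc distinguishing the two invocations:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  extern bool bpf_session_is_return(void) __ksym;

  SEC("uprobe.session")
  int session_prog(struct pt_regs *ctx)
  {
          if (bpf_session_is_return()) {
                  /* return-probe half */
                  return 0;
          }
          /* entry half: 0 runs the return half, non-zero skips it */
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";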
2024-11-11  bpf: Force uprobe bpf program to always return 0  (Jiri Olsa)

As suggested by Andrii, make uprobe multi BPF programs always return 0, so they can't force uprobe removal. Keep the int return type of uprobe_prog_run(), because it will be used in the following session changes.

Fixes: 89ae89f53d20 ("bpf: Add multi uprobe link")
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-3-jolsa@kernel.org
2024-11-11  bpf: Allow return values 0 and 1 for kprobe session  (Jiri Olsa)

A kprobe session program can return only 0 or 1; instruct the verifier to check for that.

Fixes: 535a3692ba72 ("bpf: Add support for kprobe session attach")
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-2-jolsa@kernel.org
2024-11-11  selftests/bpf: Fix uprobe consumer test (again)  (Jiri Olsa)

The new uprobe changes bring new behaviour that we need to reflect in the consumer test: a pending uprobe instance in the kernel can now survive longer, and thus might call uretprobe consumer callbacks in situations where such a callback would previously have been omitted. We now need to take that into account in the uprobe-multi consumer tests.

The idea is that the uretprobe under test either stayed attached from "before" to "after" (uret_stays + test_bit), or the uretprobe instance survived and we have a uretprobe active in "after" (uret_survives + test_bit). uret_survives simply states that a uretprobe survives if there are *any* uretprobes both before and after (overlapping or not, it doesn't matter) and a uprobe was attached before.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241107094337.3848210-1-jolsa@kernel.org
2024-11-11  bpf: Remove trailing whitespace in verifier.rst  (Abhinav Saxena)

Remove trailing whitespace in Documentation/bpf/verifier.rst.

Signed-off-by: Abhinav Saxena <xandfury@gmail.com>
Link: https://lore.kernel.org/r/20241107063708.106340-2-xandfury@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-11-11  selftests/bpf: Allow building with extra flags  (Viktor Malik)

In order to specify extra compilation or linking flags for BPF selftests, it is possible to set EXTRA_CFLAGS and EXTRA_LDFLAGS from the command line. The problem is that they are not propagated to sub-make calls (runqslower, bpftool, libbpf); in the better case they are not applied, in the worse case they cause the entire build to fail.

Propagate EXTRA_CFLAGS and EXTRA_LDFLAGS to the sub-makes. This, for instance, allows building the selftests as PIE with:

  $ make EXTRA_CFLAGS='-fPIE' EXTRA_LDFLAGS='-pie'

Without this change, the command would fail because libbpf.a would not be built with -fPIE and other PIE binaries would not link against it.

The only catch is that we have to explicitly provide empty EXTRA_CFLAGS='' and EXTRA_LDFLAGS='' to the builds of kernel modules, as we don't want to build modules with flags intended for userspace (the above example would fail, as the kernel doesn't support PIE).

Signed-off-by: Viktor Malik <vmalik@redhat.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-11-06  Merge tag 'perf-core-for-bpf-next' from tip tree  (Andrii Nakryiko)

Stable tag for bpf-next's uprobe work.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-11-04  Merge branch 'handle-possible-null-trusted-raw_tp-arguments'  (Alexei Starovoitov)

Kumar Kartikeya Dwivedi says:
====================
Handle possible NULL trusted raw_tp arguments

More context is available in [0], but the TL;DR is that the verifier incorrectly assumes that any raw tracepoint argument will always be non-NULL. This means that even when users correctly check for possibly NULL arguments, the verifier can remove the NULL check due to incorrect knowledge of the NULL-ness of the pointer. Secondly, kernel helpers or kfuncs taking these trusted tracepoint arguments incorrectly assume that all arguments will always be valid non-NULL pointers.

In this set, we mark raw_tp arguments as PTR_MAYBE_NULL on top of PTR_TRUSTED, but special-case their behavior when they are dereferenced or pointer arithmetic over them is involved. When passing trusted args to helpers or kfuncs, raw_tp programs are permitted to pass possibly NULL pointers in such cases.

Any load from such a maybe-NULL trusted PTR_TO_BTF_ID is promoted to a PROBE_MEM load to handle emanating page faults. The verifier will ensure NULL checks on such pointers are preserved and do not lead to dead code elimination.

This new behavior is not applied when ref_obj_id is non-zero, as those pointers do not belong to raw_tp arguments, but are instead acquired objects.

Since helpers and kfuncs already require attention for PTR_TO_BTF_ID (non-trusted) pointers, we do not implement any protection for such cases in this patch set, and leave it as future work for an upcoming series.

A selftest is included with this patch set to verify the new behavior; it crashes the kernel without the first patch.

[0]: https://lore.kernel.org/bpf/CAADnVQLMPPavJQR6JFsi3dtaaLHB816JN4HCV_TFWohJ61D+wQ@mail.gmail.com

Changelog:
----------
v2 -> v3
v2: https://lore.kernel.org/bpf/20241103184144.3765700-1-memxor@gmail.com
  * Fix lenient check around check_ptr_to_btf_access allowing any PTR_TO_BTF_ID with PTR_MAYBE_NULL to be deref'd.
  * Add Juri and Jiri's Tested-by, Reviewed-by resp.

v1 -> v2
v1: https://lore.kernel.org/bpf/20241101000017.3424165-1-memxor@gmail.com
  * Add patch to clean up users of gettid (Andrii)
  * Avoid nested blocks in selftest (Andrii)
  * Prevent code motion optimization in selftest using barrier()
====================

Link: https://lore.kernel.org/r/20241104171959.2938862-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-11-04  selftests/bpf: Add tests for raw_tp null handling  (Kumar Kartikeya Dwivedi)

Ensure that trusted PTR_TO_BTF_ID accesses perform PROBE_MEM handling in a raw_tp program. Without the previous fix, this selftest crashes the kernel due to a NULL-pointer dereference. Also ensure that dead code elimination does not kick in for checks on the pointer.

Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241104171959.2938862-4-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-11-04  selftests/bpf: Clean up open-coded gettid syscall invocations  (Kumar Kartikeya Dwivedi)

The availability of the gettid definition across the glibc versions supported by BPF selftests is not certain. Currently, all users in the tree open-code the syscall to gettid. Convert them to a common macro definition.

Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241104171959.2938862-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
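A sketch of the shared definition (the macro name is an assumption); glibc only grew a gettid() wrapper in version 2.30, hence the raw syscall:

  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef sys_gettid
  #define sys_gettid() ((pid_t)syscall(SYS_gettid))
  #endif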
2024-11-04  bpf: Mark raw_tp arguments with PTR_MAYBE_NULL  (Kumar Kartikeya Dwivedi)

Arguments to a raw tracepoint are tagged as trusted, which carries the semantics that the pointer will be non-NULL. However, in certain cases, a raw tracepoint argument may end up being NULL. More context about this issue is available in [0].

Thus, there is a discrepancy between reality (raw_tp arguments can actually be NULL) and the verifier's knowledge (that they are never NULL), causing explicit NULL checks to be deleted, and accesses to such pointers to potentially crash the kernel.

To fix this, mark raw_tp arguments as PTR_MAYBE_NULL, and then special-case the dereference and pointer arithmetic to permit them, and allow passing such pointers into helpers/kfuncs; these exceptions are made for raw_tp programs only. Ensure that we don't do this when ref_obj_id > 0, as in that case the pointer is an acquired object and doesn't need such adjustment.

The reason we need the mask_raw_tp_trusted_reg logic is that other places in the verifier will recheck whether the register is a trusted_reg, and would otherwise consider our register untrusted upon detecting the presence of the PTR_MAYBE_NULL flag.

To allow safe dereference, we enable PROBE_MEM marking when we see loads into trusted pointers with PTR_MAYBE_NULL.

While trusted raw_tp arguments can also be passed into helpers or kfuncs where such a broken assumption may cause issues, a future patch set will tackle their case separately, as PTR_TO_BTF_ID (without PTR_TRUSTED) can already be passed into helpers and causes similar problems. Thus, they are left alone for now.

It is possible that these checks also permit passing non-raw_tp args that are trusted PTR_TO_BTF_ID with null marking. In such a case, allowing dereference when the pointer is NULL expands allowed behavior, so this won't regress existing programs, and the case of passing these into helpers is the same as above and will be dealt with later.

Also update the failure case in the tp_btf_nullable selftest to capture the new behavior, as the verifier will no longer report an error when directly dereferencing a raw tracepoint argument marked as __nullable.

[0]: https://lore.kernel.org/bpf/ZrCZS6nisraEqehw@jlelli-thinkpadt14gen4.remote.csb

Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Reported-by: Juri Lelli <juri.lelli@redhat.com>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Fixes: 3f00c5239344 ("bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241104171959.2938862-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
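An illustrative raw_tp program showing the pattern this fix preserves; the tracepoint is just an example, and the point is that the explicit NULL check below can no longer be removed as dead code, while the dereference is emitted as a PROBE_MEM access:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("tp_btf/task_newtask")
  int BPF_PROG(handle_newtask, struct task_struct *task, u64 clone_flags)
  {
          if (!task) /* kept by the verifier now that args are PTR_MAYBE_NULL */
                  return 0;
          bpf_printk("new task pid=%d", task->pid);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";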
2024-11-04  bpf: Move btf_type_is_struct_ptr() under CONFIG_BPF_SYSCALL  (Alistair Francis)

The static inline btf_type_is_struct_ptr() function calls btf_type_skip_modifiers(), which is guarded by CONFIG_BPF_SYSCALL. btf_type_is_struct_ptr() is also only called by CONFIG_BPF_SYSCALL ifdef code, so let's only expose btf_type_is_struct_ptr() if CONFIG_BPF_SYSCALL is defined.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Link: https://lore.kernel.org/r/20241104060300.421403-1-alistair.francis@wdc.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
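The change amounts to moving the helper inside the existing guard; a hedged sketch of the result (the function body is shown for context and may differ slightly from include/linux/btf.h):

  #ifdef CONFIG_BPF_SYSCALL
  /* ... other CONFIG_BPF_SYSCALL-only helpers ... */
  static inline bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type *t)
  {
          if (!btf_type_is_ptr(t))
                  return false;
          t = btf_type_skip_modifiers(btf, t->type, NULL);
          return btf_type_is_struct(t);
  }
  #endif /* CONFIG_BPF_SYSCALL */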
2024-11-03  Merge branch 'fix-resource-leak-checks-for-tail-calls'  (Alexei Starovoitov)

Kumar Kartikeya Dwivedi says:
====================
Fix resource leak checks for tail calls

This set contains a fix for detecting unreleased RCU read locks or unfinished preempt_disable sections when performing a tail call. Spin locks are prevented by accident, since they don't allow any function calls, including tail calls (modelled as a call instruction to a helper), so we ensure they are checked as well, in preparation for relaxing the function call restriction for critical sections in the future.

Then, in the second patch, all the checks for reference leaks and locks are unified into a single function that can be called from different places. This unification patch is kept separate and placed after the fix to allow independent backporting of the fix to older kernels without a dependency on the clean up. Naturally, this creates a divergence in the disparate error messages, therefore selftests that rely on the exact error strings need to be updated to match the new verifier log message.

A selftest is included to ensure no regressions occur wrt this behavior.
====================

Link: https://lore.kernel.org/r/20241103225940.1408302-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-11-03  selftests/bpf: Add tests for tail calls with locks and refs  (Kumar Kartikeya Dwivedi)

Add failure tests to ensure bugs don't slip through for tail calls, verifying that lingering locks, RCU read sections, preempt-disabled sections, and references prevent tail calls.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241103225940.1408302-4-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-11-03  bpf: Unify resource leak checks  (Kumar Kartikeya Dwivedi)

There are similar checks covering locks, references, RCU read sections and preempt_disable sections in 3 places in the verifier, i.e. for tail calls, bpf_ld_[abs, ind], and the exit path (for BPF_EXIT and bpf_throw). Unify all of these into a common check_resource_leak() function to avoid code duplication.

Also update the error strings in selftests to the new ones in the same change to ensure clean bisection.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241103225940.1408302-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
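A hedged sketch of the unified helper's shape; the real function in kernel/bpf/verifier.c carries more context, and the field names and messages here are assumptions:

  /* Sketch: one place to reject lingering resources on a logical exit. */
  static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit,
                                 bool check_lock, const char *prefix)
  {
          if (check_lock && env->cur_state->active_locks) {
                  verbose(env, "%s cannot be used inside bpf_spin_lock-ed region\n", prefix);
                  return -EINVAL;
          }
          if (check_reference_leak(env, exception_exit)) {
                  verbose(env, "%s would lead to reference leak\n", prefix);
                  return -EINVAL;
          }
          if (env->cur_state->active_rcu_lock) {
                  verbose(env, "%s is not allowed inside RCU read section\n", prefix);
                  return -EINVAL;
          }
          if (env->cur_state->active_preempt_lock) {
                  verbose(env, "%s is not allowed with preemption disabled\n", prefix);
                  return -EINVAL;
          }
          return 0;
  }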
2024-11-03  bpf: Tighten tail call checks for lingering locks, RCU, preempt_disable  (Kumar Kartikeya Dwivedi)

There are three situations when a program logically exits and transfers control to the kernel or another program: bpf_throw, BPF_EXIT, and tail calls. The former two check for any lingering locks and references, but tail calls currently do not.

Expand the tail call checks to cover spin locks, RCU read sections and preempt-disabled sections. Spin locks already indirectly prevent tail calls, because function calls are disallowed while holding them, but the checks for preemption and RCU are more relaxed; hence, ensure tail calls are prevented in their presence.

Fixes: 9bb00b2895cb ("bpf: Add kfunc bpf_rcu_read_lock/unlock()")
Fixes: fc7566ad0a82 ("bpf: Introduce bpf_preempt_[disable,enable] kfuncs")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241103225940.1408302-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-11-01  selftests/bpf: Disable warnings on unused flags for Clang builds  (Viktor Malik)

There exist compiler flags supported by GCC but not supported by Clang (e.g. -specs=...). Currently, these cannot be passed to BPF selftests builds, even when building with GCC, as some binaries (urandom_read and liburandom_read.so) are always built with Clang and the unsupported flags make the compilation fail (since -Werror is turned on).

Add -Wno-unused-command-line-argument to these rules to suppress such errors. This allows doing things like:

  $ CFLAGS="-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1" \
    make -C tools/testing/selftests/bpf

Without this patch, the compilation would fail with:

  [...]
  clang: error: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1' [-Werror,-Wunused-command-line-argument]
  make: *** [Makefile:273: /bpf-next/tools/testing/selftests/bpf/liburandom_read.so] Error 1
  [...]

Signed-off-by: Viktor Malik <vmalik@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/2d349e9d5eb0a79dd9ff94b496769d64e6ff7654.1730449390.git.vmalik@redhat.com
2024-11-01  bpftool: Prevent setting duplicate _GNU_SOURCE in Makefile  (Viktor Malik)

When building selftests with CFLAGS set via an environment variable, the value of CFLAGS is propagated into the bpftool Makefile (called from the selftests Makefile). This makes the compilation fail, as _GNU_SOURCE is defined two times - once from the selftests Makefile (by including lib.mk) and once from the bpftool Makefile (by calling `llvm-config --cflags`):

  $ CFLAGS="" make -C tools/testing/selftests/bpf
  [...]
    CC      /bpf-next/tools/testing/selftests/bpf/tools/build/bpftool/btf.o
  <command-line>: error: "_GNU_SOURCE" redefined [-Werror]
  <command-line>: note: this is the location of the previous definition
  cc1: all warnings being treated as errors
  [...]

Filter out -D_GNU_SOURCE from the result of `llvm-config --cflags` in the bpftool Makefile to prevent this error.

Signed-off-by: Viktor Malik <vmalik@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <qmo@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/acec3108b62d4df1436cda777e58e93e033ac7a7.1730449390.git.vmalik@redhat.com
2024-11-01  bpf, bpftool: Fix incorrect disasm pc  (Leon Hwang)

This patch addresses the bpftool issue "Wrong callq address displayed" [0]. The issue stemmed from an incorrect program counter (PC) value used during disassembly with LLVM or libbfd.

For LLVM: the PC argument must represent the actual address in the kernel to compute the correct relative address.

For libbfd: the relative address can be adjusted by adding func_ksym within the custom info->print_address_func to yield the correct address.

Links:
[0] https://github.com/libbpf/bpftool/issues/109

Changes:
v2 -> v3:
  * Address comment from Quentin:
    * Remove the typedef.
v1 -> v2:
  * Fix the broken libbfd disassembler.

Fixes: e1947c750ffe ("bpftool: Refactor disassembler for JIT-ed programs")
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Quentin Monnet <qmo@kernel.org>
Reviewed-by: Quentin Monnet <qmo@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20241031152844.68817-1-leon.hwang@linux.dev
2024-11-01  selftests/bpf: Add a test for open coded kmem_cache iter  (Namhyung Kim)

The new subtest runs with bpf_prog_test_run_opts() as a syscall prog. It iterates the kmem_cache entries using a bpf_for_each loop and counts them. Finally it checks the count against the number of entries from the regular iterator.

  $ ./vmtest.sh -- ./test_progs -t kmem_cache_iter
  ...
  #130/1 kmem_cache_iter/check_task_struct:OK
  #130/2 kmem_cache_iter/check_slabinfo:OK
  #130/3 kmem_cache_iter/open_coded_iter:OK
  #130 kmem_cache_iter:OK
  Summary: 1/3 PASSED, 0 SKIPPED, 0 FAILED

Also simplify the code by using the attach routine of the skeleton.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20241030222819.1800667-2-namhyung@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-11-01  bpf: Add open coded version of kmem_cache iterator  (Namhyung Kim)

Add a new open coded iterator for kmem_cache which can be called from a BPF program like below. It doesn't take any argument and traverses all kmem_cache entries.

  struct kmem_cache *pos;

  bpf_for_each(kmem_cache, pos) {
      ...
  }

As it needs to grab slab_mutex, it should be called from sleepable BPF programs only.

Also update the existing iterator code to use the open coded version internally, as suggested by Andrii.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20241030222819.1800667-1-namhyung@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
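Since the iterator takes slab_mutex, it must run in a sleepable context; a hedged sketch of using it from a syscall program (as the selftest entry above does), with bpf_experimental.h assumed to provide bpf_for_each():

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include "bpf_experimental.h"

  int kmem_cache_cnt;

  /* Sketch: count kmem_cache entries with the open-coded iterator. */
  SEC("syscall")
  int count_kmem_caches(void *ctx)
  {
          struct kmem_cache *pos;

          bpf_for_each(kmem_cache, pos)
                  kmem_cache_cnt++;
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";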
2024-10-30  uprobes: SRCU-protect uretprobe lifetime (with timeout)  (Andrii Nakryiko)

Avoid taking a refcount on the uprobe in prepare_uretprobe(); instead, take a uretprobe-specific SRCU lock and keep it active as the kernel transfers control back to user space.

Given we can't rely on user space returning from the traced function within a reasonable time period, we need to make sure not to keep the SRCU lock active for too long. To that effect, we employ a timer callback which is meant to terminate the SRCU lock region after a predefined timeout (currently set to 100ms), and instead transfer the underlying struct uprobe's lifetime protection to refcounting. This fallback to less scalable refcounting after 100ms is a fine tradeoff from uretprobe's scalability and performance perspective, because uretprobing *long running* user functions inherently doesn't run into scalability issues (there is just not enough frequency to cause noticeable issues with either performance or scalability).

The overall trick is in ensuring synchronization between the current thread and the timer's callback fired on some other thread. To cope with that with minimal logic complications, we add an hprobe wrapper which is used to contain all the synchronization-related issues behind a small number of basic helpers: hprobe_expire() for "downgrading" a uprobe from SRCU-protected state to refcounted state, and a hprobe_consume() and hprobe_finalize() pair of single-use consuming helpers. Other than that, whatever the current thread's logic is stays the same, as the timer thread cannot modify return_instance state (or add new/remove old return_instances). It only takes care of SRCU unlock and uprobe refcounting, which is hidden from the higher-level uretprobe handling logic.

We use an atomic xchg() in hprobe_consume(), which is called from the performance-critical handle_uretprobe_chain() function run in the current context. When uncontended, this xchg() doesn't seem to hurt performance, as there are no other competing CPUs fighting for the same cache line. We also mark struct return_instance as ____cacheline_aligned to ensure no false sharing can happen.

Another technical point: we need to make sure that the list of return instances can be safely traversed under RCU from the timer callback, so we delay return_instance freeing with kfree_rcu() and make sure that list modifications use RCU-aware operations. Also, given the SRCU lock survives the transition from kernel to user space and back, we need to use the lower-level __srcu_read_lock() and __srcu_read_unlock() to avoid lockdep complaining.

Just to give an impression of the kind of performance improvements this change brings, below are benchmarking results with and without these SRCU changes, assuming other uprobe optimizations (mainly RCU Tasks Trace for entry uprobes, lockless RB-tree lookup, and lockless VMA-to-uprobe lookup) are left intact:

  WITHOUT SRCU for uretprobes
  ===========================
  uretprobe-nop  ( 1 cpus):  2.197 ± 0.002M/s  ( 2.197M/s/cpu)
  uretprobe-nop  ( 2 cpus):  3.325 ± 0.001M/s  ( 1.662M/s/cpu)
  uretprobe-nop  ( 3 cpus):  4.129 ± 0.002M/s  ( 1.376M/s/cpu)
  uretprobe-nop  ( 4 cpus):  6.180 ± 0.003M/s  ( 1.545M/s/cpu)
  uretprobe-nop  ( 8 cpus):  7.323 ± 0.005M/s  ( 0.915M/s/cpu)
  uretprobe-nop  (16 cpus):  6.943 ± 0.005M/s  ( 0.434M/s/cpu)
  uretprobe-nop  (32 cpus):  5.931 ± 0.014M/s  ( 0.185M/s/cpu)
  uretprobe-nop  (64 cpus):  5.145 ± 0.003M/s  ( 0.080M/s/cpu)
  uretprobe-nop  (80 cpus):  4.925 ± 0.005M/s  ( 0.062M/s/cpu)

  WITH SRCU for uretprobes
  ========================
  uretprobe-nop  ( 1 cpus):  1.968 ± 0.001M/s  ( 1.968M/s/cpu)
  uretprobe-nop  ( 2 cpus):  3.739 ± 0.003M/s  ( 1.869M/s/cpu)
  uretprobe-nop  ( 3 cpus):  5.616 ± 0.003M/s  ( 1.872M/s/cpu)
  uretprobe-nop  ( 4 cpus):  7.286 ± 0.002M/s  ( 1.822M/s/cpu)
  uretprobe-nop  ( 8 cpus): 13.657 ± 0.007M/s  ( 1.707M/s/cpu)
  uretprobe-nop  (32 cpus): 45.305 ± 0.066M/s  ( 1.416M/s/cpu)
  uretprobe-nop  (64 cpus): 42.390 ± 0.922M/s  ( 0.662M/s/cpu)
  uretprobe-nop  (80 cpus): 47.554 ± 2.411M/s  ( 0.594M/s/cpu)

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20241024044159.3156646-3-andrii@kernel.org
2024-10-30  uprobes: allow put_uprobe() from non-sleepable softirq context  (Andrii Nakryiko)

Currently put_uprobe() might trigger mutex_lock()/mutex_unlock(), which makes it unsuitable to be called from more restricted contexts like softirq.

Let's make put_uprobe() agnostic to the context in which it is called, and use a work queue to defer the mutex-protected clean-up steps. The RB tree removal step is also moved into the work-deferred callback to avoid a potential deadlock between the softirq-based timer callback, added in the next patch, and the rest of the uprobe code.

We can rework locking altogether as a follow up, but that's significantly more tricky, so it warrants its own patch set. For now, we need to make sure that the changes in the next patch that add the timer work correctly with the existing approach, while concentrating on the SRCU + timeout logic.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20241024044159.3156646-2-andrii@kernel.org
2024-10-30  perf/x86/rapl: Clean up cpumask and hotplug  (Kan Liang)

The rapl PMU is die scoped, which is now supported by the generic perf_event subsystem. Set the scope for the rapl PMU and remove all the cpumask and hotplug code.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Oliver Sang <oliver.sang@intel.com>
Tested-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Link: https://lore.kernel.org/r/20241010142604.770192-2-kan.liang@linux.intel.com
2024-10-30  perf/x86/rapl: Move the pmu allocation out of CPU hotplug  (Kan Liang)

There is extra code in the CPU hotplug function to allocate rapl PMUs, which makes the generic PMU hotplug support hard to apply. As long as the rapl PMUs can be allocated upfront for each die/socket, the code doesn't need to live in the CPU hotplug function. Move it to init_rapl_pmus(), and allocate a PMU for each possible die/socket.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Oliver Sang <oliver.sang@intel.com>
Link: https://lore.kernel.org/r/20241010142604.770192-1-kan.liang@linux.intel.com
2024-10-29  selftests/bpf: drop unnecessary bpf_iter.h type duplication  (Andrii Nakryiko)

Drop the bpf_iter.h header, which uses vmlinux.h but re-defines a bunch of iterator structures and some BPF constants for use in BPF iterator selftests. None of that is necessary when a fresh vmlinux.h header is generated for the vmlinux image that matches the latest selftests. So drop the ugly hacks and have nice plain vmlinux.h usage everywhere.

We could do the same with all the kfunc __ksym redefinitions, but that has a dependency on a very fresh pahole, so I'm not addressing that here.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20241029203919.1948941-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-10-29  libbpf: start v1.6 development cycle  (Andrii Nakryiko)

With the libbpf v1.5.0 release out, start the v1.6 dev cycle.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20241029184045.581537-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>