path: root/tools/testing/selftests/bpf/progs
Age | Commit message | Author
2024-09-21  Merge tag 'bpf-next-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  [Linus Torvalds]
Pull bpf updates from Alexei Starovoitov:

 - Introduce '__attribute__((bpf_fastcall))' for helpers and kfuncs with
   corresponding support in LLVM.

   It is similar to the existing 'no_caller_saved_registers' attribute in
   GCC/LLVM, with a provision for backward compatibility. It allows
   compilers to generate more efficient BPF code, assuming the verifier or
   JITs will inline or partially inline a helper/kfunc with such an
   attribute. bpf_cast_to_kern_ctx, bpf_rdonly_cast and
   bpf_get_smp_processor_id are the first set of such helpers.

 - Harden and extend ELF build ID parsing logic.

   When called from a sleepable context, the relevant parts of the ELF file
   will be read to find and fetch .note.gnu.build-id information. Also
   harden the logic to avoid TOCTOU, overflow and out-of-bounds problems.

 - Improvements and fixes for sched-ext:
    - Allow passing BPF iterators as kfunc arguments
    - Make the pointer returned from iter_next method trusted
    - Fix x86 JIT convergence issue due to growing/shrinking conditional
      jumps in variable length encoding

 - BPF_LSM related:
    - Introduce a few VFS kfuncs and consolidate them in fs/bpf_fs_kfuncs.c
    - Enforce correct range of return values from certain LSM hooks
    - Disallow attaching to other LSM hooks

 - Prerequisite work for upcoming Qdisc in BPF:
    - Allow kptrs in program provided structs
    - Support for gen_epilogue in verifier_ops

 - Important fixes:
    - Fix uprobe multi pid filter check
    - Fix bpf_strtol and bpf_strtoul helpers
    - Track equal scalars history on per-instruction level
    - Fix tailcall hierarchy on x86 and arm64
    - Fix signed division overflow to prevent INT_MIN/-1 trap on x86
    - Fix get kernel stack in BPF progs attached to tracepoint:syscall

 - Selftests:
    - Add uprobe bench/stress tool
    - Generate file dependencies to drastically improve re-build time
    - Match JIT-ed and BPF asm with __xlated/__jited keywords
    - Convert older tests to test_progs framework
    - Add support for RISC-V
    - Few fixes when BPF programs are compiled with GCC-BPF backend
      (support for GCC-BPF in BPF CI is ongoing in parallel)
    - Add traffic monitor
    - Enable cross compile and musl libc

* tag 'bpf-next-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (260 commits)
  btf: require pahole 1.21+ for DEBUG_INFO_BTF with default DWARF version
  btf: move pahole check in scripts/link-vmlinux.sh to lib/Kconfig.debug
  btf: remove redundant CONFIG_BPF test in scripts/link-vmlinux.sh
  bpf: Call the missed kfree() when there is no special field in btf
  bpf: Call the missed btf_record_free() when map creation fails
  selftests/bpf: Add a test case to write mtu result into .rodata
  selftests/bpf: Add a test case to write strtol result into .rodata
  selftests/bpf: Rename ARG_PTR_TO_LONG test description
  selftests/bpf: Fix ARG_PTR_TO_LONG {half-,}uninitialized test
  bpf: Zero former ARG_PTR_TO_{LONG,INT} args in case of error
  bpf: Improve check_raw_mode_ok test for MEM_UNINIT-tagged types
  bpf: Fix helper writes to read-only maps
  bpf: Remove truncation test in bpf_strtol and bpf_strtoul helpers
  bpf: Fix bpf_strtol and bpf_strtoul helpers for 32bit
  selftests/bpf: Add tests for sdiv/smod overflow cases
  bpf: Fix a sdiv overflow issue
  libbpf: Add bpf_object__token_fd accessor
  docs/bpf: Add missing BPF program types to docs
  docs/bpf: Add constant values for linkages
  bpf: Use fake pt_regs when doing bpf syscall tracepoint tracing
  ...
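To illustrate the bpf_fastcall item above, here is a minimal sketch of how a kfunc declaration can carry the new attribute. This is illustrative only, not the actual selftest or header source; the exact placement of the attribute in the real declarations is an assumption.

  /* Illustrative declaration: bpf_fastcall lets the compiler keep values in
   * caller-saved registers across the call, relying on the verifier/JIT to
   * inline this kfunc. */
  extern void *bpf_cast_to_kern_ctx(void *obj) __attribute__((bpf_fastcall)) __ksym;

  SEC("tc")
  int use_fastcall_kfunc(struct __sk_buff *skb)
  {
          void *kctx = bpf_cast_to_kern_ctx(skb);

          return kctx ? 0 : 1;
  }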
2024-09-13  selftests/bpf: Add a test case to write mtu result into .rodata  [Daniel Borkmann]
Add a test which attempts to call bpf_check_mtu() and writes the MTU into
the .rodata section of the BPF program, and for comparison this adds test
cases also for the .bss and .data sections again. bpf_check_mtu() is a bit
more special in that the passed mtu argument is read and written by the
helper (instead of just written to). Assert that writes into .rodata remain
rejected by the verifier.

  # ./vmtest.sh -- ./test_progs -t verifier_const
  [...]
  ./test_progs -t verifier_const
  [ 1.657367] bpf_testmod: loading out-of-tree module taints kernel.
  [ 1.657773] bpf_testmod: module verification failed: signature and/or required key missing - tainting kernel
  #473/1   verifier_const/rodata/strtol: write rejected:OK
  #473/2   verifier_const/bss/strtol: write accepted:OK
  #473/3   verifier_const/data/strtol: write accepted:OK
  #473/4   verifier_const/rodata/mtu: write rejected:OK
  #473/5   verifier_const/bss/mtu: write accepted:OK
  #473/6   verifier_const/data/mtu: write accepted:OK
  #473     verifier_const:OK
  [...]
  Summary: 2/10 PASSED, 0 SKIPPED, 0 FAILED

For comparison, without the MEM_UNINIT on bpf_check_mtu's proto:

  # ./vmtest.sh -- ./test_progs -t verifier_const
  [...]
  #473/3   verifier_const/data/strtol: write accepted:OK
  run_subtest:PASS:obj_open_mem 0 nsec
  run_subtest:FAIL:unexpected_load_success unexpected success: 0
  #473/4   verifier_const/rodata/mtu: write rejected:FAIL
  #473/5   verifier_const/bss/mtu: write accepted:OK
  #473/6   verifier_const/data/mtu: write accepted:OK
  #473     verifier_const:FAIL
  [...]

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20240913191754.13290-9-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
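A minimal sketch of the pattern being tested, not the actual verifier_const.c source (variable names and the SEC() choice are hypothetical): because bpf_check_mtu() both reads and writes its mtu_len argument, handing it a pointer into .rodata must be rejected, while .bss/.data destinations stay accepted.

  const volatile __u32 ro_mtu;    /* placed in .rodata */
  __u32 bss_mtu;                  /* placed in .bss    */

  SEC("tc")
  int write_mtu_result(struct __sk_buff *skb)
  {
          /* accepted: the helper may read and write a .bss location */
          bpf_check_mtu(skb, 0, &bss_mtu, 0, 0);
          /* must be rejected: the helper would write through a .rodata pointer */
          bpf_check_mtu(skb, 0, (__u32 *)&ro_mtu, 0, 0);
          return 0;
  }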
2024-09-13  selftests/bpf: Add a test case to write strtol result into .rodata  [Daniel Borkmann]
Add a test case which attempts to write into the .rodata section of the BPF
program, and for comparison this adds test cases also for the .bss and
.data sections.

Before fix:

  # ./vmtest.sh -- ./test_progs -t verifier_const
  [...]
  ./test_progs -t verifier_const
  tester_init:PASS:tester_log_buf 0 nsec
  process_subtest:PASS:obj_open_mem 0 nsec
  process_subtest:PASS:specs_alloc 0 nsec
  run_subtest:PASS:obj_open_mem 0 nsec
  run_subtest:FAIL:unexpected_load_success unexpected success: 0
  #465/1   verifier_const/rodata: write rejected:FAIL
  #465/2   verifier_const/bss: write accepted:OK
  #465/3   verifier_const/data: write accepted:OK
  #465     verifier_const:FAIL
  [...]

After fix:

  # ./vmtest.sh -- ./test_progs -t verifier_const
  [...]
  ./test_progs -t verifier_const
  #465/1   verifier_const/rodata: write rejected:OK
  #465/2   verifier_const/bss: write accepted:OK
  #465/3   verifier_const/data: write accepted:OK
  #465     verifier_const:OK
  [...]

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240913191754.13290-8-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
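A minimal sketch of the corresponding strtol case under the same assumptions (hypothetical program, SEC() chosen for illustration): the helper writes its result through the last argument, so a .rodata destination has to be refused.

  const volatile long ro_res;     /* placed in .rodata */
  long bss_res;                   /* placed in .bss    */

  SEC("socket")
  int write_strtol_result(void *ctx)
  {
          char num[] = "1024";

          /* accepted: result lands in .bss */
          bpf_strtol(num, sizeof(num) - 1, 10, &bss_res);
          /* must be rejected: result would land in .rodata */
          bpf_strtol(num, sizeof(num) - 1, 10, (long *)&ro_res);
          return 0;
  }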
2024-09-13  selftests/bpf: Rename ARG_PTR_TO_LONG test description  [Daniel Borkmann]
Given we got rid of ARG_PTR_TO_LONG, change the test case description to
avoid potential confusion:

  # ./vmtest.sh -- ./test_progs -t verifier_int_ptr
  [...]
  ./test_progs -t verifier_int_ptr
  [ 1.610563] bpf_testmod: loading out-of-tree module taints kernel.
  [ 1.611049] bpf_testmod: module verification failed: signature and/or required key missing - tainting kernel
  #489/1   verifier_int_ptr/arg pointer to long uninitialized:OK
  #489/2   verifier_int_ptr/arg pointer to long half-uninitialized:OK
  #489/3   verifier_int_ptr/arg pointer to long misaligned:OK
  #489/4   verifier_int_ptr/arg pointer to long size < sizeof(long):OK
  #489/5   verifier_int_ptr/arg pointer to long initialized:OK
  #489     verifier_int_ptr:OK
  Summary: 1/5 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20240913191754.13290-7-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-09-13  selftests/bpf: Fix ARG_PTR_TO_LONG {half-,}uninitialized test  [Daniel Borkmann]
The assumption of 'in privileged mode reads from uninitialized stack locations are permitted' is not quite correct since the verifier was probing for read access rather than write access. Both tests need to be annotated as __success for privileged and unprivileged. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240913191754.13290-6-daniel@iogearbox.net Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-09-13  selftests/bpf: Add tests for sdiv/smod overflow cases  [Yonghong Song]
Subtests are added to exercise the patched code which handles
  - LLONG_MIN/-1
  - INT_MIN/-1
  - LLONG_MIN%-1
  - INT_MIN%-1
where -1 could be an immediate or in a register. Without the previous
patch, all these cases will crash the kernel on the x86_64 platform.

Additional tests are added to use small values (e.g. -5/-1, 5%-1, etc.)
in order to exercise the additional logic with patched insns.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240913150332.1188102-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
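A minimal sketch of the overflow pattern being exercised (hypothetical program, assumed to be built with -mcpu=v4 so clang emits BPF_SDIV/BPF_SMOD; not the actual selftest):

  SEC("socket")
  int sdiv_smod_overflow(void *ctx)
  {
          volatile long long ll_min = -0x7fffffffffffffffLL - 1; /* LLONG_MIN */
          volatile int i_min = -0x7fffffff - 1;                  /* INT_MIN   */
          volatile long long d64 = -1;
          volatile int d32 = -1;
          long long acc = 0;

          /* before the kernel fix these trapped (#DE) on x86_64 at runtime;
           * with the patched insns they produce defined results instead */
          acc += ll_min / d64;
          acc += ll_min % d64;
          acc += i_min / d32;
          acc += i_min % d32;
          return acc != 0;
  }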
2024-09-11  selftests/bpf: Fix arena_atomics failure due to llvm change  [Yonghong Song]
llvm change [1] made a change such that __sync_fetch_and_{and,or,xor}()
will generate atomic_fetch_*() insns even if the return value is not used.
This is a deliberate choice to make sure barrier semantics are preserved
from source code to asm insn. But the change in [1] caused the
arena_atomics selftest failure.

  test_arena_atomics:PASS:arena atomics skeleton open 0 nsec
  libbpf: prog 'and': BPF program load failed: Permission denied
  libbpf: prog 'and': -- BEGIN PROG LOAD LOG --
  arg#0 reference type('UNKNOWN ') size cannot be determined: -22
  0: R1=ctx() R10=fp0
  ; if (pid != (bpf_get_current_pid_tgid() >> 32)) @ arena_atomics.c:87
  0: (18) r1 = 0xffffc90000064000       ; R1_w=map_value(map=arena_at.bss,ks=4,vs=4)
  2: (61) r6 = *(u32 *)(r1 +0)          ; R1_w=map_value(map=arena_at.bss,ks=4,vs=4) R6_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
  3: (85) call bpf_get_current_pid_tgid#14      ; R0_w=scalar()
  4: (77) r0 >>= 32                     ; R0_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
  5: (5d) if r0 != r6 goto pc+11        ; R0_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff)) R6_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0x)
  ; __sync_fetch_and_and(&and64_value, 0x011ull << 32); @ arena_atomics.c:91
  6: (18) r1 = 0x100000000060           ; R1_w=scalar()
  8: (bf) r1 = addr_space_cast(r1, 0, 1)        ; R1_w=arena
  9: (18) r2 = 0x1100000000             ; R2_w=0x1100000000
  11: (db) r2 = atomic64_fetch_and((u64 *)(r1 +0), r2)
  BPF_ATOMIC stores into R1 arena is not allowed
  processed 9 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
  -- END PROG LOAD LOG --
  libbpf: prog 'and': failed to load: -13
  libbpf: failed to load object 'arena_atomics'
  libbpf: failed to load BPF skeleton 'arena_atomics': -13
  test_arena_atomics:FAIL:arena atomics skeleton load unexpected error: -13 (errno 13)
  #3       arena_atomics:FAIL

The failure is due to [2], where atomic{64,}_fetch_{and,or,xor}() are not
allowed for arena addresses. Version 2 of the patch fixed the issue by
using inline asm ([3]). But further discussion suggested finding a way to
generate a locked insn from source, which is more user friendly. So in the
not-yet-merged llvm patch ([4]), if relaxed memory ordering is used and the
return value is not used, a locked insn can be generated. So with llvm
patch [4] to compile the bpf selftest, the following code

  __c11_atomic_fetch_and(&and64_value, 0x011ull << 32, memory_order_relaxed);

is able to generate a locked insn, hence fixing the selftest failure.

  [1] https://github.com/llvm/llvm-project/pull/106494
  [2] d503a04f8bc0 ("bpf: Add support for certain atomics in bpf_arena to x86 JIT")
  [3] https://lore.kernel.org/bpf/20240803025928.4184433-1-yonghong.song@linux.dev/
  [4] https://github.com/llvm/llvm-project/pull/107343

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240909223431.1666305-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-09-11  selftests/bpf: add build ID tests  [Andrii Nakryiko]
Add a new set of tests validating behavior of capturing stack traces with
build ID. We extend the uprobe_multi target binary with the ability to
trigger a uprobe (so that we can capture stack traces from it), but we also
allow forcing build ID data to be either resident or non-resident in memory
(see also a comment about quirks of MADV_PAGEOUT). That way we can validate
that in non-sleepable context we won't get build ID (as expected), but with
sleepable uprobes we will get that build ID regardless of whether it is
physically present in memory.

Also, we add a small add-on linker script which reorders the
.note.gnu.build-id section and puts it after the (big) .text section,
putting build ID data outside of the very first page of the ELF file. This
will test all the relaxations we did in the build ID parsing logic in the
kernel thanks to the freader abstraction.

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240829174232.3133883-11-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-09-11  selftests/bpf: Expand skb dynptr selftests for tp_btf  [Philo Lu]
Add 3 test cases for skb dynptr used in tp_btf:
 - test_dynptr_skb_tp_btf: use skb dynptr in tp_btf and make sure it is
   read-only.
 - skb_invalid_ctx_fentry/skb_invalid_ctx_fexit: bpf_dynptr_from_skb should
   fail in fentry/fexit.

In test_dynptr_skb_tp_btf, to trigger the tracepoint in kfree_skb,
test_pkt_access is used for its test_run, as in kfree_skb.c. Because the
test process is different from others, a new setup type is defined, i.e.,
SETUP_SKB_PROG_TP.

The result is like:
  $ ./test_progs -t 'dynptr/test_dynptr_skb_tp_btf'
  #84/14   dynptr/test_dynptr_skb_tp_btf:OK
  #84      dynptr:OK
  #127     kfunc_dynptr_param:OK
  Summary: 2/1 PASSED, 0 SKIPPED, 0 FAILED

  $ ./test_progs -t 'dynptr/skb_invalid_ctx_f'
  #84/85   dynptr/skb_invalid_ctx_fentry:OK
  #84/86   dynptr/skb_invalid_ctx_fexit:OK
  #84      dynptr:OK
  #127     kfunc_dynptr_param:OK
  Summary: 2/2 PASSED, 0 SKIPPED, 0 FAILED

Also fix two coding style nits (change spaces to tabs).

Signed-off-by: Philo Lu <lulie@linux.alibaba.com>
Link: https://lore.kernel.org/r/20240911033719.91468-6-lulie@linux.alibaba.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
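A minimal sketch of the read-only check, not the actual dynptr selftest source; the kfunc extern declaration and tracepoint argument types are assumptions:

  /* assumed declaration of the skb dynptr kfunc */
  extern int bpf_dynptr_from_skb(void *skb, __u64 flags,
                                 struct bpf_dynptr *ptr) __ksym;

  SEC("tp_btf/kfree_skb")
  int BPF_PROG(dynptr_skb_tp_btf, void *skb, void *location)
  {
          __u8 data[2] = {0xaa, 0xbb};
          struct bpf_dynptr ptr;

          if (bpf_dynptr_from_skb(skb, 0, &ptr))
                  return 0;
          /* tp_btf skbs are read-only, so this write is expected to fail */
          bpf_dynptr_write(&ptr, 0, data, sizeof(data), 0);
          return 0;
  }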
2024-09-11  selftests/bpf: Add test for __nullable suffix in tp_btf  [Philo Lu]
Add a tracepoint with the __nullable suffix in bpf_testmod, and add test
cases for it:

  $ ./test_progs -t "tp_btf_nullable"
  #406/1   tp_btf_nullable/handle_tp_btf_nullable_bare1:OK
  #406/2   tp_btf_nullable/handle_tp_btf_nullable_bare2:OK
  #406     tp_btf_nullable:OK
  Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Philo Lu <lulie@linux.alibaba.com>
Link: https://lore.kernel.org/r/20240911033719.91468-3-lulie@linux.alibaba.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-09-09  bpf: Fix error message on kfunc arg type mismatch  [Maxim Mikityanskiy]
When the "arg#%d expected pointer to ctx, but got %s" error is printed,
both template parts actually point to the type of the argument; therefore,
it will also say "but got PTR", regardless of the actual register type.
Fix the message to print the register type in the second part of the
template, change the existing test to adapt to the new format, and add a
new test for the case when the arg is a pointer to context but the reg is
a scalar.

Fixes: 00b85860feb8 ("bpf: Rewrite kfunc argument handling")
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/bpf/20240909133909.1315460-1-maxim@isovalent.com
2024-09-05  bpf/selftests: coverage for tp and perf event progs using kfuncs  [JP Kobryn]
This coverage ensures that kfuncs are allowed within tracepoint and perf event programs. Signed-off-by: JP Kobryn <inwardvessel@gmail.com> Link: https://lore.kernel.org/r/20240905223812.141857-3-inwardvessel@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-09-05  selftests/bpf: Rename fallback in bpf_dctcp to avoid naming conflict  [Pu Lehui]
Recently, when compiling bpf selftests on RV64, the following compilation
failure occurred:

  progs/bpf_dctcp.c:29:21: error: redefinition of 'fallback' as different kind of symbol
     29 | volatile const char fallback[TCP_CA_NAME_MAX];
        |                     ^
  /workspace/tools/testing/selftests/bpf/tools/include/vmlinux.h:86812:15:
  note: previous definition is here
  86812 | typedef u32 (*fallback)(u32, const unsigned char *, size_t);

The reason is that the `fallback` symbol has been defined in
arch/riscv/lib/crc32.c, which causes a symbol conflict when vmlinux.h is
included in bpf_dctcp. Let's rename the `fallback` string to `fallback_cc`
in bpf_dctcp to fix this compilation failure.

Signed-off-by: Pu Lehui <pulehui@huawei.com>
Link: https://lore.kernel.org/r/20240905081401.1894789-3-pulehui@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-09-05  selftests/bpf: fix some typos in selftests  [Lin Yikai]
Hi, fix some spelling errors in selftests; the details are as follows:

- in the code:
    test_bpf_sk_stoarge_map_iter_fd(void) -> test_bpf_sk_storage_map_iter_fd(void)
    load BTF from btf_data.o -> load BTF from btf_data.bpf.o
- in the code comments:
    preample -> preamble
    multi-contollers -> multi-controllers
    errono -> errno
    unsighed/unsinged -> unsigned
    egree -> egress
    shoud -> should
    regsiter -> register
    assummed -> assumed
    conditiona -> conditional
    rougly -> roughly
    timetamp -> timestamp
    ingores -> ignores
    null-termainted -> null-terminated
    slepable -> sleepable
    implemenation -> implementation
    veriables -> variables
    timetamps -> timestamps
    substitue a costant -> substitute a constant
    secton -> section
    unreferened -> unreferenced
    verifer -> verifier
    libppf -> libbpf
    ...

Signed-off-by: Lin Yikai <yikai.lin@vivo.com>
Link: https://lore.kernel.org/r/20240905110354.3274546-1-yikai.lin@vivo.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-09-05  selftests/bpf: Add uprobe multi pid filter test for fork-ed processes  [Jiri Olsa]
The idea is to create and monitor 3 uprobes, each triggered in a separate
process, and make sure the bpf program gets executed just for the proper
PID specified via the pid filter.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240905115124.1503998-4-jolsa@kernel.org
2024-09-04  selftests/bpf: Enable test_bpf_syscall_macro: Syscall_arg1 on s390 and arm64  [Pu Lehui]
Considering that CO-RE direct read access to the first system call argument is already available on s390 and arm64, let's enable test_bpf_syscall_macro:syscall_arg1 on these architectures. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240831041934.1629216-4-pulehui@huaweicloud.com
2024-09-04  selftests/bpf: Add a selftest for x86 jit convergence issues  [Yonghong Song]
The core part of the selftest, i.e., the je <-> jmp cycle, mimics the
original sched-ext bpf program. The test will fail without the previous
patch.

I tried to create some cases for other potential cycles (je <-> je,
jmp <-> je and jmp <-> jmp) with a similar pattern to the test in this
patch, but failed. So this patch only contains one test for the je <-> jmp
cycle.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240904221256.37389-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-09-04  selftests: bpf: Replace sizeof(arr)/sizeof(arr[0]) with ARRAY_SIZE  [Feng Yang]
The ARRAY_SIZE macro is more compact and more idiomatic in the Linux
source.

Signed-off-by: Feng Yang <yangfeng@kylinos.cn>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240903072559.292607-1-yangfeng59949@163.com
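The before/after shape of the replacement, as a small sketch (the ARRAY_SIZE definition shown is the common one; the selftests pick it up from their shared headers):

  #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

  static int sum(void)
  {
          int vals[] = {1, 2, 3, 4};
          int i, s = 0;

          /* before: for (i = 0; i < sizeof(vals) / sizeof(vals[0]); i++) */
          for (i = 0; i < ARRAY_SIZE(vals); i++)
                  s += vals[i];
          return s;
  }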
2024-08-29  selftests/bpf: Add tests for iter next method returning valid pointer  [Juntong Deng]
This patch adds test cases for the iter next method returning a valid
pointer, which can also be used as usage examples.

Currently the iter next method should return a valid pointer.

iter_next_trusted is the correct usage and tests whether the iter next
method returns a valid pointer. bpf_iter_task_vma_next has the KF_RET_NULL
flag, so the returned pointer may be NULL. We need to check if the pointer
is NULL before using it.

iter_next_trusted_or_null is the incorrect usage. There is no check before
using the pointer, so it will be rejected by the verifier.

iter_next_rcu and iter_next_rcu_or_null are similar test cases for
KF_RCU_PROTECTED iterators.

iter_next_rcu_not_trusted is used to test that the pointer returned by the
iter next method of a KF_RCU_PROTECTED iterator cannot be passed to
KF_TRUSTED_ARGS kfuncs.

iter_next_ptr_mem_not_trusted is used to test that the base type PTR_TO_MEM
should not be combined with the type flag PTR_TRUSTED.

Signed-off-by: Juntong Deng <juntong.deng@outlook.com>
Link: https://lore.kernel.org/r/AM6PR03MB5848709758F6922F02AF9F1F99962@AM6PR03MB5848.eurprd03.prod.outlook.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-29  selftests/bpf: Test epilogue patching when the main prog has multiple BPF_EXIT  [Martin KaFai Lau]
This patch tests the epilogue patching when the main prog has multiple BPF_EXIT. The verifier should have patched the 2nd (and later) BPF_EXIT with a BPF_JA that goes back to the earlier patched epilogue instructions. Acked-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20240829210833.388152-10-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-29  selftests/bpf: A pro/epilogue test when the main prog jumps back to the 1st insn  [Martin KaFai Lau]
This patch adds a pro/epilogue test when the main prog has a goto insn that goes back to the very first instruction of the prog. It is to test the correctness of the adjust_jmp_off(prog, 0, delta) after the verifier has applied the prologue and/or epilogue patch. Acked-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20240829210833.388152-9-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-29  selftests/bpf: Add tailcall epilogue test  [Martin KaFai Lau]
This patch adds a gen_epilogue test to test a main prog using a
bpf_tail_call.

A non test_loader test is used. The tailcall target program,
"test_epilogue_subprog", needs to be used in a struct_ops map before it can
be loaded. Another struct_ops map is also needed to host the actual
"test_epilogue_tailcall" struct_ops program that does the bpf_tail_call.
The earlier test_loader patch will attach all struct_ops maps but the
bpf_testmod.c does not support >1 attached struct_ops.

The earlier patch used the test_loader which has already covered checking
for the patched pro/epilogue instructions. This is done by the __xlated
tag.

This patch goes for the regular skel load and syscall test to do the
tailcall test, which also allows directly passing the
"struct st_ops_args *args" as ctx_in to the SEC("syscall") program.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20240829210833.388152-8-martin.lau@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-29  selftests/bpf: Test gen_prologue and gen_epilogue  [Martin KaFai Lau]
This test adds a new struct_ops "bpf_testmod_st_ops" in bpf_testmod. The
ops of the bpf_testmod_st_ops are triggered by new kfunc calls
"bpf_kfunc_st_ops_test_*logue". These new kfunc calls are primarily used by
the SEC("syscall") program. The test triggering sequence is like:

  SEC("syscall")
  syscall_prologue(struct st_ops_args *args)
      bpf_kfunc_st_op_test_prologue(args)
          st_ops->test_prologue(args)

.gen_prologue adds 1000 to args->a.
.gen_epilogue adds 10000 to args->a.
.gen_epilogue will also set r0 to 2 * args->a.

The .gen_prologue and .gen_epilogue of the bpf_testmod_st_ops will test the
prog->aux->attach_func_name to decide if it needs to generate code.

The main programs of pro_epilogue.c will call a new kfunc
bpf_kfunc_st_ops_inc10 which does "args->a += 10". It will also call a
subprog() which does "args->a += 1".

This patch uses the test_loader infra to check the __xlated instructions
patched after gen_prologue and/or gen_epilogue. The __xlated check is based
on Eduard's example (Thanks!) in v1.

args->a is returned by the struct_ops prog (either the main prog or the
epilogue). Thus, the __retval of the SEC("syscall") prog is checked. For
example, when triggering the ops in the
'SEC("struct_ops/test_epilogue") int test_epilogue', the expected args->a
is +1 (subprog call) + 10 (kfunc call) + 10000 (.gen_epilogue) = 10011.
The expected return value is 2 * 10011 (.gen_epilogue).

Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20240829210833.388152-7-martin.lau@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-29  selftests/bpf: Make sure stashed kptr in local kptr is freed recursively  [Amery Hung]
When dropping a local kptr, any kptr stashed into it is supposed to be freed through bpf_obj_free_fields->__bpf_obj_drop_impl recursively. Add a test to make sure it happens. The test first stashes a referenced kptr to "struct task" into a local kptr and gets the reference count of the task. Then, it drops the local kptr and reads the reference count of the task again. Since bpf_obj_free_fields and __bpf_obj_drop_impl will go through the local kptr recursively during bpf_obj_drop, the dtor of the stashed task kptr should eventually be called. The second reference count should be one less than the first one. Signed-off-by: Amery Hung <amery.hung@bytedance.com> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20240827011301.608620-1-amery.hung@bytedance.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-28  selftests/bpf: Add test for zero offset or non-zero offset pointers as KF_ACQUIRE kfuncs argument  [Juntong Deng]
This patch adds test cases for zero offset (implicit cast) or non-zero
offset pointers as KF_ACQUIRE kfuncs arguments. Currently KF_ACQUIRE kfuncs
should support passing in pointers like &sk->sk_write_queue (non-zero
offset) or &sk->__sk_common (zero offset) and not be rejected by the
verifier.

Signed-off-by: Juntong Deng <juntong.deng@outlook.com>
Link: https://lore.kernel.org/r/AM6PR03MB5848CB6F0D4D9068669A905B99952@AM6PR03MB5848.eurprd03.prod.outlook.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-26  Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  [Jakub Kicinski]
Daniel Borkmann says:

====================
pull-request: bpf-next 2024-08-23

We've added 10 non-merge commits during the last 15 day(s) which contain
a total of 10 files changed, 222 insertions(+), 190 deletions(-).

The main changes are:

1) Add TCP_BPF_SOCK_OPS_CB_FLAGS to bpf_*sockopt() to address the case
   when long-lived sockets miss a chance to set additional callbacks
   if a sockops program was not attached early in their lifetime,
   from Alan Maguire.

2) Add a batch of BPF selftest improvements which fix a few bugs and add
   missing features to improve the test coverage of sockmap/sockhash,
   from Michal Luczaj.

3) Fix a false-positive Smatch-reported off-by-one in tcp_validate_cookie()
   which is part of the test_tcp_custom_syncookie BPF selftest,
   from Kuniyuki Iwashima.

4) Fix the flow_dissector BPF selftest which had a bug in the IP header's
   tot_len calculation doing subtraction after htons() instead of inside
   htons(), from Asbjørn Sloth Tønnesen.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
  selftest: bpf: Remove mssind boundary check in test_tcp_custom_syncookie.c.
  selftests/bpf: Introduce __attribute__((cleanup)) in create_pair()
  selftests/bpf: Exercise SOCK_STREAM unix_inet_redir_to_connected()
  selftests/bpf: Honour the sotype of af_unix redir tests
  selftests/bpf: Simplify inet_socketpair() and vsock_socketpair_connectible()
  selftests/bpf: Socket pair creation, cleanups
  selftests/bpf: Support more socket types in create_pair()
  selftests/bpf: Avoid subtraction after htons() in ipip tests
  selftests/bpf: add sockopt tests for TCP_BPF_SOCK_OPS_CB_FLAGS
  bpf/bpf_get,set_sockopt: add option to set TCP-BPF sock ops flags
====================

Link: https://patch.msgid.link/20240823134959.1091-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-08-23  selftests/bpf: Add tests for bpf_copy_from_user_str kfunc  [Jordan Rome]
This adds tests for both the happy path and the error path. Signed-off-by: Jordan Rome <linux@jordanrome.com> Link: https://lore.kernel.org/r/20240823195101.3621028-2-linux@jordanrome.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-23  selftests/bpf: Test bpf_kptr_xchg stashing into local kptr  [Dave Marchevsky]
Test stashing both referenced kptr and local kptr into local kptrs. Then, test unstashing them. Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Acked-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Amery Hung <amery.hung@bytedance.com> Link: https://lore.kernel.org/r/20240813212424.2871455-6-amery.hung@bytedance.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-23  selftests/bpf: add multi-uprobe benchmarks  [Andrii Nakryiko]
Add multi-uprobe and multi-uretprobe benchmarks to the bench tool. Multi-
and classic uprobes/uretprobes have different low-level triggering code
paths, so it's sometimes important to be able to benchmark both flavors of
uprobes/uretprobes.

Sample examples from my dev machine below. Single-threaded performance
almost doesn't differ, but with more parallel CPUs triggering the same
uprobe/uretprobe the difference grows. This might be due to [0], but given
the code is slightly different, there could be other sources of slowdown.

Note, all these numbers will change due to ongoing work to improve
uprobe/uretprobe scalability (e.g., [1]), but having a benchmark like this
is useful for measurements and debugging nevertheless.

  #!/bin/bash
  set -eufo pipefail

  for p in 1 8 16 32; do
      for i in uprobe-nop uretprobe-nop uprobe-multi-nop uretprobe-multi-nop; do
          summary=$(sudo ./bench -w1 -d3 -p$p -a trig-$i | tail -n1)
          total=$(echo "$summary" | cut -d'(' -f1 | cut -d' ' -f3-)
          percpu=$(echo "$summary" | cut -d'(' -f2 | cut -d')' -f1 | cut -d'/' -f1)
          printf "%-21s (%2d cpus): %s (%s/s/cpu)\n" $i $p "$total" "$percpu"
      done
      echo
  done

  uprobe-nop            ( 1 cpus):  1.020 ± 0.005M/s  ( 1.020M/s/cpu)
  uretprobe-nop         ( 1 cpus):  0.515 ± 0.009M/s  ( 0.515M/s/cpu)
  uprobe-multi-nop      ( 1 cpus):  1.036 ± 0.004M/s  ( 1.036M/s/cpu)
  uretprobe-multi-nop   ( 1 cpus):  0.512 ± 0.005M/s  ( 0.512M/s/cpu)

  uprobe-nop            ( 8 cpus):  3.481 ± 0.030M/s  ( 0.435M/s/cpu)
  uretprobe-nop         ( 8 cpus):  2.222 ± 0.008M/s  ( 0.278M/s/cpu)
  uprobe-multi-nop      ( 8 cpus):  3.769 ± 0.094M/s  ( 0.471M/s/cpu)
  uretprobe-multi-nop   ( 8 cpus):  2.482 ± 0.007M/s  ( 0.310M/s/cpu)

  uprobe-nop            (16 cpus):  2.968 ± 0.011M/s  ( 0.185M/s/cpu)
  uretprobe-nop         (16 cpus):  1.870 ± 0.002M/s  ( 0.117M/s/cpu)
  uprobe-multi-nop      (16 cpus):  3.541 ± 0.037M/s  ( 0.221M/s/cpu)
  uretprobe-multi-nop   (16 cpus):  2.123 ± 0.026M/s  ( 0.133M/s/cpu)

  uprobe-nop            (32 cpus):  2.524 ± 0.026M/s  ( 0.079M/s/cpu)
  uretprobe-nop         (32 cpus):  1.572 ± 0.003M/s  ( 0.049M/s/cpu)
  uprobe-multi-nop      (32 cpus):  2.717 ± 0.003M/s  ( 0.085M/s/cpu)
  uretprobe-multi-nop   (32 cpus):  1.687 ± 0.007M/s  ( 0.053M/s/cpu)

  [0] https://lore.kernel.org/linux-trace-kernel/20240805202803.1813090-1-andrii@kernel.org/
  [1] https://lore.kernel.org/linux-trace-kernel/20240731214256.3588718-1-andrii@kernel.org/

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20240806042935.3867862-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-23  selftests/bpf: match both retq/rethunk in verifier_tailcall_jit  [Eduard Zingerman]
Depending on kernel parameters, x86 jit generates either retq or jump to rethunk for 'exit' instruction. The difference could be seen when kernel is booted with and without mitigations=off parameter. Relax the verifier_tailcall_jit test case to match both variants. Fixes: e5bdd6a8be78 ("selftests/bpf: validate jit behaviour for tail calls") Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20240823080644.263943-3-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-22  selftests/bpf: Add testcase for updating attached freplace prog to prog_array map  [Leon Hwang]
Add a selftest to confirm that the issue, where updating an attached
freplace prog in a prog_array map returned -EINVAL, has been fixed.

  cd tools/testing/selftests/bpf; ./test_progs -t tailcalls
  328/25  tailcalls/tailcall_freplace:OK
  328     tailcalls:OK
  Summary: 1/25 PASSED, 0 SKIPPED, 0 FAILED

Acked-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
Link: https://lore.kernel.org/r/20240728114612.48486-3-leon.hwang@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-22  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  [Alexei Starovoitov]
Cross-merge bpf fixes after downstream PR including important fixes (from
the bpf-next point of view):

  commit 41c24102af7b ("selftests/bpf: Filter out _GNU_SOURCE when compiling test_cpp")
  commit fdad456cbcca ("bpf: Fix updating attached freplace prog in prog_array map")

No conflicts.

Adjacent changes in:
  include/linux/bpf_verifier.h
  kernel/bpf/verifier.c
  tools/testing/selftests/bpf/Makefile

Link: https://lore.kernel.org/bpf/20240813234307.82773-1-alexei.starovoitov@gmail.com/
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-22  selftests/bpf: check if bpf_fastcall is recognized for kfuncs  [Eduard Zingerman]
Use kfunc_bpf_cast_to_kern_ctx() and kfunc_bpf_rdonly_cast() to verify that bpf_fastcall pattern is recognized for kfunc calls. Acked-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20240822084112.3257995-7-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-22  selftests/bpf: rename nocsr -> bpf_fastcall in selftests  [Eduard Zingerman]
The attribute used by the LLVM implementation of the feature has been
changed from no_caller_saved_registers to bpf_fastcall (see [1]). This
commit replaces references to nocsr with references to bpf_fastcall to
keep the LLVM and selftests parts in sync.

[1] https://github.com/llvm/llvm-project/pull/105417

Acked-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240822084112.3257995-3-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-21  selftest: bpf: Remove mssind boundary check in test_tcp_custom_syncookie.c.  [Kuniyuki Iwashima]
Smatch reported a possible off-by-one in tcp_validate_cookie(). However,
it's a false positive because the possible range of mssind is limited from
0 to 3 by the preceding calculation:

  mssind = (cookie & (3 << 6)) >> 6;

Now, the verifier does not complain without the boundary check. Let's
remove the checks.

Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/bpf/6ae12487-d3f1-488b-9514-af0dac96608f@stanley.mountain/
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240821013425.49316-1-kuniyu@amazon.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-08-21  selftests/bpf: validate __xlated same way as __jited  [Eduard Zingerman]
Both __xlated and __jited work with disassembly. It is logical to have both
work in a similar manner.

This commit updates __xlated macro handling in test_loader.c by making it
expect matches on sequential lines, the same way as __jited operates. For
example:

  __xlated("1: *(u64 *)(r10 -16) = r1")      ;; matched on line N
  __xlated("3: r0 = &(void __percpu *)(r0)") ;; matched on line N+1

Also:

  __xlated("1: *(u64 *)(r10 -16) = r1")      ;; matched on line N
  __xlated("...")                            ;; not matched
  __xlated("3: r0 = &(void __percpu *)(r0)") ;; matched on any line >= N

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240820102357.3372779-10-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-21  selftests/bpf: validate jit behaviour for tail calls  [Eduard Zingerman]
Add a test with a program calling a sub-program which does a tail call.
The idea is to verify the instructions generated by the jit for tail calls:
- in program and sub-program prologues;
- for the subprogram call instruction;
- for the tail call itself.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240820102357.3372779-9-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-21  selftests/bpf: __jited test tag to check disassembly after jit  [Eduard Zingerman]
Allow verifying jit behaviour by writing tests as below:

  SEC("tp")
  __arch_x86_64
  __jited("    endbr64")
  __jited("    nopl    (%rax,%rax)")
  __jited("    xorq    %rax, %rax")
  ...
  __naked void some_test(void)
  {
          asm volatile (... ::: __clobber_all);
  }

Allow regular expressions in patterns, same way as in __msg. By default
assume that each __jited pattern has to be matched on the next consecutive
line of the disassembly, e.g.:

  __jited("    endbr64")             # matched on line N
  __jited("    nopl    (%rax,%rax)") # matched on line N+1

If a match occurs on a wrong line, an error is reported. To override this
behaviour use __jited("..."), e.g.:

  __jited("    endbr64")             # matched on line N
  __jited("...")                     # not matched
  __jited("    nopl    (%rax,%rax)") # matched on any line >= N

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240820102357.3372779-7-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-21  selftests/bpf: replace __regex macro with "{{...}}" patterns  [Eduard Zingerman]
Upcoming changes require a notation to specify regular expression matches
for regular verifier log messages, disassembly of BPF instructions, and
disassembly of jited instructions.

Neither basic nor extended POSIX regular expressions w/o additional
escaping are good for this role because of the wide use of special
characters in disassembly, for example:

  movq    -0x10(%rbp), %rax  ;; () are special characters
  cmpq    $0x21, %rax        ;; $ is a special character
  *(u64 *)(r10 -16) = r1     ;; * and () are special characters

This commit borrows syntax from LLVM's FileCheck utility. It replaces the
__regex macro with the ability to embed regular expressions in __msg
patterns using "{{" "}}" pairs for escaping. Syntax for __msg patterns:

  pattern := (<verbatim text> | regex)*
  regex := "{{" <posix extended regular expression> "}}"

For example, the pattern "foo{{[0-9]+}}" matches strings like "foo0",
"foo007", etc.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240820102357.3372779-5-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-21  selftests/bpf: fix to avoid __msg tag de-duplication by clang  [Eduard Zingerman]
__msg, __regex and __xlated tags are based on
__attribute__((btf_decl_tag("..."))) annotations. Clang de-duplicates such
annotations, e.g. the following two sequences of tags are identical in the
final BTF:

  /* seq A */            /* seq B */
  __tag("foo")           __tag("foo")
  __tag("bar")           __tag("bar")
  __tag("foo")

Fix this by adding a unique suffix for each tag using the __COUNTER__
pre-processor macro. E.g. here is a new definition for __msg:

  #define __msg(msg) \
          __attribute__((btf_decl_tag("comment:test_expect_msg=" XSTR(__COUNTER__) "=" msg)))

Using this definition, the "seq A" from the example above is translated to
BTF as follows:

  [..] DECL_TAG 'comment:test_expect_msg=0=foo' type_id=X component_idx=-1
  [..] DECL_TAG 'comment:test_expect_msg=1=bar' type_id=X component_idx=-1
  [..] DECL_TAG 'comment:test_expect_msg=2=foo' type_id=X component_idx=-1

Surprisingly, this bug affects a single existing test:
verifier_spill_fill/old_stack_misc_vs_cur_ctx_ptr, where a sequence of
identical messages was expected in the log.

Fixes: 537c3f66eac1 ("selftests/bpf: add generic BPF program tester-loader")
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240820102357.3372779-4-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-21  selftests/bpf: test passing iterator to a kfunc  [Andrii Nakryiko]
Define BPF iterator "getter" kfunc, which accepts iterator pointer as one of the arguments. Make sure that argument passed doesn't have to be the very first argument (unlike new-next-destroy combo). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240808232230.2848712-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-08-19  selftest/bpf: Adapt inline asm operand constraint for GCC support  [Cupertino Miranda]
GCC errors out when compiling tailcall_bpf2bpf_hierarchy2.c and
tailcall_bpf2bpf_hierarchy3.c with the following error:

  progs/tailcall_bpf2bpf_hierarchy2.c: In function 'tailcall_bpf2bpf_hierarchy_2':
  progs/tailcall_bpf2bpf_hierarchy2.c:66:9: error: input operand constraint contains '+'
     66 |         asm volatile (""::"r+"(ret));
        |         ^~~

Changed the implementation to make use of the __sink macro that abstracts
the desired behaviour. The proposed change seems valid for both GCC and
CLANG.

Signed-off-by: Cupertino Miranda <cupertino.miranda@oracle.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240819151129.1366484-4-cupertino.miranda@oracle.com
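A sketch of the resulting shape, assuming __sink() is the macro provided by the selftests' bpf_misc.h (its exact definition here is an assumption, not quoted from the tree):

  /* assumed to match the bpf_misc.h helper */
  #define __sink(expr) asm volatile("" : "+g"(expr))

  SEC("tc")
  int tailcall_hierarchy_entry(struct __sk_buff *skb)
  {
          int ret = 0;

          /* before: asm volatile (""::"r+"(ret));  -- GCC rejects "r+" */
          __sink(ret); /* keeps 'ret' observable for both GCC and clang */
          return ret;
  }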
2024-08-14  selftests/bpf: convert test_skb_cgroup_id_user to test_progs  [Alexis Lothoré (eBPF Foundation)]
test_skb_cgroup_id_user allows testing skb cgroup id retrieval at different
levels, but is not integrated into test_progs, so it is not run
automatically in CI.

The test overlaps a bit with cgroup_skb_sk_lookup_kern, which is integrated
into test_progs and tests the skb cgroup helpers extensively, but there is
still one major difference between the two tests which justifies the
conversion: cgroup_skb_sk_lookup_kern deals with a BPF_PROG_TYPE_CGROUP_SKB
(attached on a cgroup), while test_skb_cgroup_id_user deals with a
BPF_PROG_TYPE_SCHED_CLS (attached on a qdisc).

Convert test_skb_cgroup_id_user into the test_progs framework in order to
run it automatically in CI. The main differences with the original test are
the following:
- rename the test to make it shorter and more straightforward regarding the
  tested feature
- the wrapping shell script has been dropped since every setup step is now
  handled in the main C test file
- the test has been renamed to be shorter and to reflect the tested API
- add dedicated assert logs per level to ease test failure debugging
- use global variables instead of maps to access bpf prog data

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
Link: https://lore.kernel.org/r/20240813-convert_cgroup_tests-v4-4-a33c03458cf6@bootlin.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-08-14  selftests/bpf: add proper section name to bpf prog and rename it  [Alexis Lothoré (eBPF Foundation)]
test_skb_cgroup_id_kern.c is currently involved in a manual test. In its
current form, it can not be used with the auto-generated skeleton APIs,
because the section name is not valid to allow libbpf to deduce the program
type.

Update the section name to allow skeleton APIs usage. Also rename the
program to make it shorter and more straightforward regarding the API it is
testing. While doing so, make sure that test_skb_cgroup_id.sh passes to get
a working reference before converting it to test_progs:
- update the obj name
- fix a loading issue (verifier rejecting the program when loaded through
  tc, because of a map not found), by preloading the whole obj with bpftool

Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
Link: https://lore.kernel.org/r/20240813-convert_cgroup_tests-v4-3-a33c03458cf6@bootlin.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-08-14  selftests/bpf: convert test_cgroup_storage to test_progs  [Alexis Lothoré (eBPF Foundation)]
test_cgroup_storage is currently a standalone program which is not run when
executing test_progs. Convert it to the test_progs framework so it can be
automatically executed in CI. The conversion led to the following changes:
- converted the raw bpf program in the userspace test file into a dedicated
  test program in the progs/ dir
- reduced the scope of the cgroup_storage test: the content of this test
  overlaps with some other tests already present in test_progs, most
  notably netcnt and cgroup_storage_multi*. Those tests already check local
  storage, per-cpu local storage, cgroup interaction, etc. extensively, so
  the new test only keeps the part testing that the program return code
  (based on map content) properly leads to the packet being passed or
  dropped.

Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
Link: https://lore.kernel.org/r/20240813-convert_cgroup_tests-v4-2-a33c03458cf6@bootlin.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-08-14  selftests/bpf: convert get_current_cgroup_id_user to test_progs  [Alexis Lothoré (eBPF Foundation)]
get_current_cgroup_id_user allows testing the bpf_get_current_cgroup_id()
bpf API but is not integrated into test_progs, and so is not tested
automatically in CI. Convert it to the test_progs framework to allow
running it automatically. The most notable differences with the old test
are the following:
- the new test relies on autoattach instead of manually hooking/enabling
  the targeted tracepoint through perf_event, which reduces the test code
  size quite a lot
- it also accesses bpf prog data through global variables instead of maps
- the sleep duration passed to the nanosleep syscall has been reduced to
  its minimum to not impact overall CI duration (we only care about the
  syscall being properly triggered, not about the passed duration)

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
Link: https://lore.kernel.org/r/20240813-convert_cgroup_tests-v4-1-a33c03458cf6@bootlin.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-08-12  selftests/bpf: Add a test to verify previous stacksafe() fix  [Yonghong Song]
A selftest is added such that, without the previous patch, a crash can
happen. With the previous patch, the test can run successfully. The new
test is written in a way which mimics the original crash case:

  main_prog
    static_prog_1
      static_prog_2

where static_prog_1 has different paths to static_prog_2, and some paths
have stack allocated while some other paths do not. A stacksafe() check in
static_prog_2() triggered the crash.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240812214852.214037-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
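A hypothetical shape of that call graph (not the actual selftest source): static_prog_1 reaches static_prog_2 through two paths, only one of which touches its own stack.

  static __noinline int static_prog_2(int x)
  {
          return x + 1;
  }

  static __noinline int static_prog_1(int x)
  {
          char buf[16];

          if (x) {
                  /* this path allocates and uses stack */
                  bpf_probe_read_kernel(buf, sizeof(buf), (void *)(long)x);
                  x += buf[0];
          }
          /* the other path reaches static_prog_2 with no stack usage */
          return static_prog_2(x);
  }

  SEC("tp")
  int main_prog(void *ctx)
  {
          return static_prog_1(bpf_get_prandom_u32() & 1);
  }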
2024-08-08  selftests/bpf: add sockopt tests for TCP_BPF_SOCK_OPS_CB_FLAGS  [Alan Maguire]
Add tests to set TCP sockopt TCP_BPF_SOCK_OPS_CB_FLAGS via bpf_setsockopt() and use a cgroup/getsockopt program to retrieve the value to verify it was set. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/r/20240808150558.1035626-3-alan.maguire@oracle.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
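A minimal sketch of the setter side, assuming the usual sockops includes and that TCP_BPF_SOCK_OPS_CB_FLAGS is the new bpf_setsockopt() optname added by the series (program and names are illustrative, not the selftest source):

  SEC("sockops")
  int set_cb_flags(struct bpf_sock_ops *skops)
  {
          int flags = BPF_SOCK_OPS_STATE_CB_FLAG;

          if (skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB)
                  /* value read back later via a cgroup/getsockopt program */
                  bpf_setsockopt(skops, SOL_TCP, TCP_BPF_SOCK_OPS_CB_FLAGS,
                                 &flags, sizeof(flags));
          return 1;
  }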
2024-08-07  selftests/bpf: Add tests for bpf_get_dentry_xattr  [Song Liu]
Add test for bpf_get_dentry_xattr on hook security_inode_getxattr. Verify that the kfunc can read the xattr. Also test failing getxattr from user space by returning non-zero from the LSM bpf program. Acked-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240806230904.71194-4-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
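A minimal sketch of the hook usage, with the kfunc signature assumed to mirror bpf_get_file_xattr() and the xattr name chosen purely for illustration (includes and error constants omitted):

  char xattr_value[64];

  extern int bpf_get_dentry_xattr(struct dentry *dentry, const char *name__str,
                                  struct bpf_dynptr *value_p) __ksym;

  SEC("lsm.s/inode_getxattr")
  int BPF_PROG(test_inode_getxattr, struct dentry *dentry, const char *name)
  {
          struct bpf_dynptr value;

          bpf_dynptr_from_mem(xattr_value, sizeof(xattr_value), 0, &value);
          if (bpf_get_dentry_xattr(dentry, "user.kfuncs", &value) < 0)
                  return 0;
          /* non-zero return makes the user-space getxattr fail, as in the test */
          return -1;
  }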
2024-08-06  selftests/bpf: add positive tests for new VFS based BPF kfuncs  [Matt Bobrowski]
Add a bunch of positive selftests which extensively cover the various
contexts and parameters in which the new VFS based BPF kfuncs may be used.

Again, the following VFS based BPF kfuncs are thoroughly tested within this
new selftest:

  * struct file *bpf_get_task_exe_file(struct task_struct *);
  * void bpf_put_file(struct file *);
  * int bpf_path_d_path(struct path *, char *, size_t);

Acked-by: Christian Brauner <brauner@kernel.org>
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
Link: https://lore.kernel.org/r/20240731110833.1834742-4-mattbobrowski@google.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
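A minimal sketch exercising the three kfuncs listed above from a sleepable LSM hook (extern declarations follow the signatures in the list; the hook choice and names are illustrative, not the selftest source):

  extern struct file *bpf_get_task_exe_file(struct task_struct *task) __ksym;
  extern void bpf_put_file(struct file *file) __ksym;
  extern int bpf_path_d_path(struct path *path, char *buf, size_t buf__sz) __ksym;

  char path_buf[256];

  SEC("lsm.s/file_open")
  int BPF_PROG(get_task_exe_path, struct file *file)
  {
          struct task_struct *task = bpf_get_current_task_btf();
          struct file *exe;

          exe = bpf_get_task_exe_file(task);
          if (!exe)
                  return 0;
          bpf_path_d_path(&exe->f_path, path_buf, sizeof(path_buf));
          bpf_put_file(exe);
          return 0;
  }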