path: root/tools/lib/bpf/libbpf.c
2021-09-14  libbpf: Introduce legacy kprobe events support  (Rafael David Tinoco)
Allow creation of kprobe tracepoint events through the legacy interface, since the kprobe dynamic PMU support that libbpf uses by default was only added in v4.17. Store the legacy kprobe name in struct bpf_perf_link instead of creating a new "subclass" of bpf_perf_link. This is fine as it adds just two new fields, which will also be reused for legacy uprobe support in follow-up patches. Signed-off-by: Rafael David Tinoco <rafaeldtinoco@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210912064844.3181742-1-rafaeldtinoco@gmail.com
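For context, the legacy interface boils down to registering a probe through the tracefs kprobe_events file and then opening the resulting trace event with perf_event_open(). The following is a rough sketch of that path, not libbpf's actual implementation; the event name "mykprobe", the probed function do_sys_open, and the tracefs mount point are illustrative assumptions:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <linux/perf_event.h>
    #include <sys/syscall.h>

    static int legacy_kprobe_perf_fd(void)
    {
        const char *cmd = "p:kprobes/mykprobe do_sys_open";
        int fd, id;
        ssize_t err;
        FILE *f;

        /* register a new kprobe trace event named "mykprobe" */
        fd = open("/sys/kernel/debug/tracing/kprobe_events", O_WRONLY | O_APPEND);
        if (fd < 0)
            return -1;
        err = write(fd, cmd, strlen(cmd));
        close(fd);
        if (err < 0)
            return -1;

        /* read the trace event id assigned to it */
        f = fopen("/sys/kernel/debug/tracing/events/kprobes/mykprobe/id", "r");
        if (!f || fscanf(f, "%d", &id) != 1)
            return -1;
        fclose(f);

        /* open it as a tracepoint perf event; a BPF program is then attached
         * with PERF_EVENT_IOC_SET_BPF and enabled with PERF_EVENT_IOC_ENABLE */
        struct perf_event_attr attr = {
            .type = PERF_TYPE_TRACEPOINT,
            .size = sizeof(struct perf_event_attr),
            .config = id,
        };
        return syscall(__NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */, -1, 0);
    }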
2021-09-07  libbpf: Don't crash on object files with no symbol tables  (Toke Høiland-Jørgensen)
If libbpf encounters an ELF file that has been stripped of its symbol table, it will crash in bpf_object__add_programs() when trying to dereference the obj->efile.symbols pointer. Fix this by erroring out of bpf_object__elf_collect() if it is not able to find the symbol table. v2: - Move the check into bpf_object__elf_collect() and add a nice error message Fixes: 6245947c1b3c ("libbpf: Allow gaps in BPF program sections to support overriden weak functions") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210901114812.204720-1-toke@redhat.com
2021-08-17  libbpf: Add uprobe ref counter offset support for USDT semaphores  (Andrii Nakryiko)
When attaching to uprobes through perf subsystem, it's possible to specify offset of a so-called USDT semaphore, which is just a reference counted u16, used by kernel to keep track of how many tracers are attached to a given location. Support for this feature was added in [0], so just wire this through uprobe_opts. This is important to enable implementing USDT attachment and tracing through libbpf's bpf_program__attach_uprobe_opts() API. [0] a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210815070609.987780-16-andrii@kernel.org
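Illustrative only: with this change, a USDT-style attachment could be expressed roughly as below. The library path and both offsets are placeholders that would normally be extracted from the target ELF's notes; this is a sketch, not libbpf's internal USDT logic:

    DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, opts,
        .ref_ctr_offset = 0x1010, /* offset of the USDT semaphore (the u16 counter) */
        .retprobe = false,
    );
    struct bpf_link *link;

    link = bpf_program__attach_uprobe_opts(prog, -1 /* any process */,
                                           "/usr/lib/libfoo.so",
                                           0x4d2 /* probe offset */, &opts);
    if (libbpf_get_error(link))
        /* handle attach error, details are in errno */;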
2021-08-17  libbpf: Add bpf_cookie to perf_event, kprobe, uprobe, and tp attach APIs  (Andrii Nakryiko)
Wire through bpf_cookie for all attach APIs that use perf_event_open under the hood:
- for kprobes, extend the existing bpf_kprobe_opts with a bpf_cookie field;
- for perf_event, uprobe, and tracepoint APIs, add their _opts variants and pass bpf_cookie through opts.
For kernels that don't support BPF_LINK_CREATE for perf_events, and thus don't support bpf_cookie either, return an error and log a warning for the user. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210815070609.987780-12-andrii@kernel.org
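A hedged sketch of the user-space side; the probed function and the cookie value are arbitrary placeholders:

    DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, opts, .bpf_cookie = 0x1234 /* arbitrary u64 */);
    struct bpf_link *link;

    link = bpf_program__attach_kprobe_opts(prog, "do_sys_open", &opts);
    if (libbpf_get_error(link))
        /* e.g. kernel lacks BPF_LINK_CREATE for perf_events, so no cookie support */;

On the BPF side, the program can read the value back with the bpf_get_attach_cookie() helper introduced in the same series, which makes it practical to share one program across many attach points.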
2021-08-17  libbpf: Use BPF perf link when supported by kernel  (Andrii Nakryiko)
Detect kernel support for BPF perf link and prefer it when attaching to perf_event, tracepoint, kprobe/uprobe. The underlying perf_event FD will be kept open until the BPF link is destroyed, at which point both the perf_event FD and the BPF link FD will be closed. This preserves the current behavior in which the perf_event FD is open for the duration of bpf_link's lifetime and the user is able to "disconnect" bpf_link from the underlying FD (with bpf_link__disconnect()), so that bpf_link__destroy() doesn't close the underlying perf_event FD. When a BPF perf link is used, disconnect will keep both the perf_event and bpf_link FDs open, so it will be up to the (advanced) user to close them. This approach is demonstrated in the bpf_cookie.c selftests, added in this patch set. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210815070609.987780-10-andrii@kernel.org
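The resulting ownership rules might be exercised like this (a sketch; prog and pfd, an already-opened perf_event FD, are assumed to exist):

    struct bpf_link *link = bpf_program__attach_perf_event(prog, pfd);

    if (libbpf_get_error(link))
        return -1;
    /* detach the struct from its FDs: with a BPF perf link both the perf_event
     * FD and the link FD stay open and are now owned by the caller */
    bpf_link__disconnect(link);
    bpf_link__destroy(link); /* frees memory only; does not close the FDs */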
2021-08-17  libbpf: Remove unused bpf_link's destroy operation, but add dealloc  (Andrii Nakryiko)
bpf_link->destroy() isn't used by any code, so remove it. Instead, add ability to override deallocation procedure, with default doing plain free(link). This is necessary for cases when we want to "subclass" struct bpf_link to keep extra information, as is the case in the next patch adding struct bpf_link_perf. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210815070609.987780-9-andrii@kernel.org
2021-08-13  libbpf: Support weak typed ksyms.  (Hao Luo)
Currently, weak typeless ksyms default to zero when they don't exist in the kernel. However, weak typed ksyms are rejected by libbpf if they cannot be resolved. This means that if a bpf object contains the declaration of a nonexistent weak typed ksym, it will be rejected even if no program references the symbol. Nonexistent weak typed ksyms can also default to zero, just like typeless ones. This allows programs that access weak typed ksyms to be accepted by the verifier, as long as the accesses are guarded. For example:
extern const int bpf_link_fops3 __ksym __weak;
/* then in the BPF program */
if (&bpf_link_fops3) {
    /* use bpf_link_fops3 */
}
If actual use of a nonexistent typed ksym is not guarded properly, the verifier will see that the register is not PTR_TO_BTF_ID and won't allow it to be used for direct memory reads or passed to BPF helpers. Signed-off-by: Hao Luo <haoluo@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210812003819.2439037-1-haoluo@google.com
2021-07-30  libbpf: Add btf__load_vmlinux_btf/btf__load_module_btf  (Hengqi Chen)
Add two new APIs: btf__load_vmlinux_btf and btf__load_module_btf. btf__load_vmlinux_btf is just an alias to the existing API named libbpf_find_kernel_btf, renamed to be more precise and consistent with existing BTF APIs. btf__load_module_btf can be used to load module BTF; add it for completeness. These two APIs are useful for implementing tracing and introspection tools. This is part of the effort towards libbpf 1.0 ([0]). [0] Closes: https://github.com/libbpf/libbpf/issues/280 Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210730114012.494408-1-hengqi.chen@gmail.com
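Typical usage might look like this sketch (the module name is an arbitrary example):

    struct btf *vmlinux_btf, *module_btf;

    vmlinux_btf = btf__load_vmlinux_btf();
    if (libbpf_get_error(vmlinux_btf))
        return -1;
    /* module BTF is split BTF layered on top of the vmlinux base BTF */
    module_btf = btf__load_module_btf("nf_conntrack", vmlinux_btf);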
2021-07-29  libbpf: Rename btf__get_from_id() as btf__load_from_kernel_by_id()  (Quentin Monnet)
Rename function btf__get_from_id() as btf__load_from_kernel_by_id() to better indicate what the function does. Change the new function so that, instead of requiring a pointer to the pointer to update and returning with an error code, it takes a single argument (the id of the BTF object) and returns the corresponding pointer. This is more in line with the existing constructors. The other tools calling the (soon-to-be) deprecated btf__get_from_id() function will be updated in a future commit. References: - https://github.com/libbpf/libbpf/issues/278 - https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#btfh-apis Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210729162028.29512-4-quentin@isovalent.com
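Side by side, assuming id holds a BTF object id obtained elsewhere (a sketch):

    __u32 id = 42; /* placeholder BTF object id */
    struct btf *old_btf = NULL, *new_btf;
    int err;

    /* old, soon-to-be-deprecated form: error code out, pointer through argument */
    err = btf__get_from_id(id, &old_btf);

    /* new form: behaves like the other constructors and returns the pointer */
    new_btf = btf__load_from_kernel_by_id(id);
    if (libbpf_get_error(new_btf))
        /* handle error */;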
2021-07-29  libbpf: Rename btf__load() as btf__load_into_kernel()  (Quentin Monnet)
As part of the effort to move towards a v1.0 for libbpf, rename btf__load() function, used to "upload" BTF information into the kernel, as btf__load_into_kernel(). This new name better reflects what the function does. References: - https://github.com/libbpf/libbpf/issues/278 - https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#btfh-apis Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210729162028.29512-3-quentin@isovalent.com
2021-07-29  libbpf: Return non-null error on failures in libbpf_find_prog_btf_id()  (Quentin Monnet)
Variable "err" is initialised to -EINVAL so that this error code is returned when something goes wrong in libbpf_find_prog_btf_id(). However, a recent change in the function made use of the variable in such a way that it is set to 0 if retrieving linear information on the program is successful, and this 0 value remains if we error out on failures at later stages. Let's fix this by setting err to -EINVAL later in the function. Fixes: e9fc3ce99b34 ("libbpf: Streamline error reporting for high-level APIs") Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210729162028.29512-2-quentin@isovalent.com
2021-07-27  libbpf: Fix race when pinning maps in parallel  (Martynas Pumputis)
When loading multiple programs in parallel which use the same to-be-pinned map, it is possible that two instances of the loader will call bpf_object__create_maps() at the same time. If the map doesn't exist when both instances call bpf_object__reuse_map(), then one of the instances will fail with EEXIST when calling bpf_map__pin(). Fix the race by retrying map reuse if bpf_map__pin() returns EEXIST. The fix is similar to the one in iproute2: e4c4685fd6e4 ("bpf: Fix race condition with map pinning"). Before retrying the pinning, we don't do any special cleanup of internal map state. Closer code inspection revealed that it's not required:
- bpf_object__create_map(): map->inner_map is destroyed after a successful call, map->fd is closed if pinning fails.
- bpf_object__populate_internal_map(): created map elements are destroyed upon close(map->fd).
- init_map_slots(): slots are freed after their initialization.
Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210726152001.34845-1-m@lambda.lt
2021-07-27  libbpf: Fix comment typo  (Jason Wang)
Remove the repeated word 'the' in line 48. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210727115928.74600-1-wangborong@cdjrlc.com
2021-07-26  libbpf: Split CO-RE logic into relo_core.c.  (Alexei Starovoitov)
Move the CO-RE logic into a separate file. The internal interface between libbpf and CO-RE is the bpf_core_apply_relo_insn() function and a few structs defined in relo_core.h. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210721000822.40958-5-alexei.starovoitov@gmail.com
2021-07-26  libbpf: Move CO-RE types into relo_core.h.  (Alexei Starovoitov)
In order to make a clean split of the CO-RE logic, move its types into an independent header file. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210721000822.40958-4-alexei.starovoitov@gmail.com
2021-07-26  libbpf: Split bpf_core_apply_relo() into bpf_program independent helper.  (Alexei Starovoitov)
bpf_core_apply_relo() doesn't need to know bpf_program internals and hashmap details. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210721000822.40958-3-alexei.starovoitov@gmail.com
2021-07-26  libbpf: Cleanup the layering between CORE and bpf_program.  (Alexei Starovoitov)
CO-RE processing functions don't need to know 'struct bpf_program' details. Cleanup the layering to eventually be able to move CO-RE logic into a separate file. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210721000822.40958-2-alexei.starovoitov@gmail.com
2021-07-23  libbpf: Add bpf_map__pin_path function  (Evgeniy Litvinenko)
Add bpf_map__pin_path, so that the inconsistently named bpf_map__get_pin_path can be deprecated later. This is part of the effort towards libbpf v1.0: https://github.com/libbpf/libbpf/issues/307 Also, add a selftest for the new function. Signed-off-by: Evgeniy Litvinenko <evgeniyl@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210723221511.803683-1-evgeniyl@fb.com
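Usage mirrors the other bpf_map__* getters; the map lookup by name is just for illustration:

    struct bpf_map *map = bpf_object__find_map_by_name(obj, "my_map");
    const char *path = bpf_map__pin_path(map); /* NULL if no pin path was set */

    if (path)
        printf("map pins at %s\n", path);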
2021-07-22  libbpf: Export bpf_program__attach_kprobe_opts function  (Jiri Olsa)
Export bpf_program__attach_kprobe_opts as a public API. Rename bpf_program_attach_kprobe_opts to bpf_kprobe_opts and turn it into OPTS struct. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Tested-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/20210721215810.889975-4-jolsa@kernel.org
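With the API public, a plain opts-based attach might look like the following sketch; the probed function is an arbitrary example and the fields shown (retprobe, offset) are those this series deals with:

    DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, opts,
        .retprobe = true, /* fire on function return */
        .offset = 0,
    );
    struct bpf_link *link = bpf_program__attach_kprobe_opts(prog, "do_unlinkat", &opts);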
2021-07-22  libbpf: Allow decimal offset for kprobes  (Jiri Olsa)
Allow specifying a decimal offset in the SEC macro, like: SEC("kprobe/bpf_fentry_test7+5") Add a selftest for that. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Tested-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/20210721215810.889975-3-jolsa@kernel.org
2021-07-22  libbpf: Fix func leak in attach_kprobe  (Jiri Olsa)
Add missing free() for func pointer in attach_kprobe function. Fixes: a2488b5f483f ("libbpf: Allow specification of "kprobe/function+offset"") Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Tested-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/20210721215810.889975-2-jolsa@kernel.org
2021-07-19  libbpf: Fix removal of inner map in bpf_object__create_map  (Martynas Pumputis)
If creating an outer map of a BTF-defined map-in-map fails (via bpf_object__create_map()), then its previously created inner map won't be destroyed. Fix this by ensuring that the destroy routines are not bypassed in the case of a failure. Fixes: 646f02ffdd49c ("libbpf: Add BTF-defined map-in-map support") Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210719173838.423148-2-m@lambda.lt
2021-07-16  libbpf: Fix the possible memory leak on error  (Shuyi Cheng)
If the strdup() fails then we need to call bpf_object__close(obj) to avoid a resource leak. Fixes: 166750bc1dd2 ("libbpf: Support libbpf-provided extern variables") Signed-off-by: Shuyi Cheng <chengshuyi@linux.alibaba.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1626180159-112996-3-git-send-email-chengshuyi@linux.alibaba.com
2021-07-16  libbpf: Introduce 'btf_custom_path' to 'bpf_obj_open_opts'  (Shuyi Cheng)
btf_custom_path allows developers to load custom BTF which libbpf will subsequently use for CO-RE relocation instead of vmlinux BTF. With btf_custom_path in bpf_object_open_opts, one can directly use the skeleton's <objname>_bpf__open_opts() API to pass in the btf_custom_path parameter, as opposed to using bpf_object__load_xattr(), which is slated to be deprecated ([0]). This work continues previous work started by another developer ([1]). [0] https://lore.kernel.org/bpf/CAEf4BzbJZLjNoiK8_VfeVg_Vrg=9iYFv+po-38SMe=UzwDKJ=Q@mail.gmail.com/#t [1] https://yhbt.net/lore/all/CAEf4Bzbgw49w2PtowsrzKQNcxD4fZRE6AKByX-5-dMo-+oWHHA@mail.gmail.com/ Signed-off-by: Shuyi Cheng <chengshuyi@linux.alibaba.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1626180159-112996-2-git-send-email-chengshuyi@linux.alibaba.com
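For example, with a generated skeleton (the "myapp" skeleton name and the BTF path are placeholders):

    DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
        /* BTF used for CO-RE relocation instead of the kernel's vmlinux BTF */
        .btf_custom_path = "/etc/myapp/vmlinux.btf",
    );
    struct myapp_bpf *skel = myapp_bpf__open_opts(&opts);

    if (!skel)
        /* open failed; see errno */;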
2021-07-15  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Alexei Starovoitov says:
====================
pull-request: bpf-next 2021-07-15
The following pull-request contains BPF updates for your *net-next* tree. We've added 45 non-merge commits during the last 15 day(s) which contain a total of 52 files changed, 3122 insertions(+), 384 deletions(-). The main changes are:
1) Introduce bpf timers, from Alexei.
2) Add sockmap support for unix datagram socket, from Cong.
3) Fix potential memleak and UAF in the verifier, from He.
4) Add bpf_get_func_ip helper, from Jiri.
5) Improvements to generic XDP mode, from Kumar.
6) Support for passing xdp_md to XDP programs in bpf_prog_run, from Zvi.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-07-15  libbpf: Allow specification of "kprobe/function+offset"  (Alan Maguire)
kprobes can be placed on most instructions in a function, not just at the entry, and ftrace and bpftrace support the function+offset notation for probe placement. Adding parsing of func_name into func+offset to bpf_program__attach_kprobe() allows the user to specify, for example, SEC("kprobe/bpf_fentry_test5+0x6"), and the offset can be passed to perf_event_open_probe() to support kprobe attachment. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210714094400.396467-8-jolsa@kernel.org
2021-07-15  libbpf: Add bpf_program__attach_kprobe_opts function  (Jiri Olsa)
Add bpf_program__attach_kprobe_opts, which does the same as bpf_program__attach_kprobe but takes an opts argument. Currently the opts struct holds just a retprobe bool, but we will add a new field in a following patch. The function is not exported, so there's no need to add a size field to struct bpf_program_attach_kprobe_opts for now. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210714094400.396467-7-jolsa@kernel.org
2021-07-12  libbpf: Fix reuse of pinned map on older kernel  (Martynas Pumputis)
When loading a BPF program with a pinned map, the loader checks whether the pinned map can be reused, i.e. whether their properties match. To derive the properties of the pinned map, the loader invokes BPF_OBJ_GET_INFO_BY_FD and then does the comparison. Unfortunately, on < 4.12 kernels BPF_OBJ_GET_INFO_BY_FD is not available, so loading the program fails with the following errors:
libbpf: failed to get map info for map FD 5: Invalid argument
libbpf: couldn't reuse pinned map at '/sys/fs/bpf/tc/globals/cilium_call_policy': parameter mismatch
libbpf: map 'cilium_call_policy': error reusing pinned map
libbpf: map 'cilium_call_policy': failed to create: Invalid argument(-22)
libbpf: failed to load object 'bpf_overlay.o'
To fix this, fall back to deriving the map properties via /proc/$PID/fdinfo/$MAP_FD if BPF_OBJ_GET_INFO_BY_FD fails with EINVAL, which can be used as an indicator that the kernel doesn't support the latter. Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210712125552.58705-1-m@lambda.lt
2021-07-06  libbpf: Restore errno return for functions that were already returning it  (Toke Høiland-Jørgensen)
The update to streamline libbpf error reporting intended to change all functions to return the errno as a negative return value if LIBBPF_STRICT_DIRECT_ERRS is set. However, if the flag is *not* set, the return value changes for the two functions that were already returning a negative errno unconditionally: bpf_link__unpin() and perf_buffer__poll(). This is a user-visible API change that breaks applications, so let's revert these two functions back to unconditionally returning a negative errno value. Fixes: e9fc3ce99b34 ("libbpf: Streamline error reporting for high-level APIs") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210706122355.236082-1-toke@redhat.com
2021-06-21  libbpf: Add extra BPF_PROG_TYPE check to bpf_object__probe_loading  (Jonathan Edwards)
eBPF has been backported for RHEL 7 w/ kernel 3.10-940+ [0]. However only the following program types are supported [1]:
BPF_PROG_TYPE_KPROBE
BPF_PROG_TYPE_TRACEPOINT
BPF_PROG_TYPE_PERF_EVENT
For libbpf this causes an EINVAL return during the bpf_object__probe_loading call, which only checks whether programs of type BPF_PROG_TYPE_SOCKET_FILTER can load. This patch tries BPF_PROG_TYPE_TRACEPOINT as a fallback before erroring out. BPF_PROG_TYPE_KPROBE was not a good candidate because on some kernels it requires knowledge of the LINUX_VERSION_CODE. [0] https://www.redhat.com/en/blog/introduction-ebpf-red-hat-enterprise-linux-7 [1] https://access.redhat.com/articles/3550581 Signed-off-by: Jonathan Edwards <jonathan.edwards@165gc.onmicrosoft.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210619151007.GA6963@165gc.onmicrosoft.com
2021-06-17  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Daniel Borkmann says:
====================
pull-request: bpf-next 2021-06-17
The following pull-request contains BPF updates for your *net-next* tree. We've added 50 non-merge commits during the last 25 day(s) which contain a total of 148 files changed, 4779 insertions(+), 1248 deletions(-). The main changes are:
1) BPF infrastructure to migrate TCP child sockets from a listener to another in the same reuseport group/map, from Kuniyuki Iwashima.
2) Add a provably sound, faster and more precise algorithm for tnum_mul() as noted in https://arxiv.org/abs/2105.05398, from Harishankar Vishwanathan.
3) Streamline error reporting changes in libbpf as planned out in the 'libbpf: the road to v1.0' effort, from Andrii Nakryiko.
4) Add broadcast support to xdp_redirect_map(), from Hangbin Liu.
5) Extends bpf_map_lookup_and_delete_elem() functionality to 4 more map types, that is, {LRU_,PERCPU_,LRU_PERCPU_,}HASH, from Denis Salopek.
6) Support new LLVM relocations in libbpf to make them more linker friendly, also add a doc to describe the BPF backend relocations, from Yonghong Song.
7) Silence long standing KUBSAN complaints on register-based shifts in interpreter, from Daniel Borkmann and Eric Biggers.
8) Add dummy PT_REGS macros in libbpf to fail BPF program compilation when target arch cannot be determined, from Lorenz Bauer.
9) Extend AF_XDP to support large umems with 1M+ pages, from Magnus Karlsson.
10) Fix two minor libbpf tc BPF API issues, from Kumar Kartikeya Dwivedi.
11) Move libbpf BPF_SEQ_PRINTF/BPF_SNPRINTF macros that can be used by BPF programs to bpf_helpers.h header, from Florent Revest.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-15  libbpf: Set expected_attach_type for BPF_PROG_TYPE_SK_REUSEPORT.  (Kuniyuki Iwashima)
This commit introduces a new section (sk_reuseport/migrate) and sets the expected_attach_type for each of the two sections of BPF_PROG_TYPE_SK_REUSEPORT programs. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20210612123224.12525-11-kuniyu@amazon.co.jp
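A minimal program for the new section could look like the following sketch; a real selector would call bpf_sk_select_reuseport() against a reuseport map to pick the target socket:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("sk_reuseport/migrate")
    int migrate_select(struct sk_reuseport_md *reuse_md)
    {
        /* SK_PASS without an explicit selection leaves the choice to the kernel */
        return SK_PASS;
    }

    char _license[] SEC("license") = "GPL";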
2021-06-11  libbpf: Simplify the return expression of bpf_object__init_maps function  (Wang Hai)
There is no need for special treatment of the 'ret == 0' case. This patch simplifies the return expression. Signed-off-by: Wang Hai <wanghai38@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210609115651.3392580-1-wanghai38@huawei.com
2021-06-08  libbpf: Fix pr_warn type warnings on 32bit  (Michal Suchanek)
The printed value is ptrdiff_t and is formatted with %ld. This works on 64bit but produces a warning on 32bit. Fix the format specifier to %td. Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.") Signed-off-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210604112448.32297-1-msuchanek@suse.de
2021-05-27  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
cdc-wdm: s/kill_urbs/poison_urbs/ to fix build Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-05-25  libbpf: Streamline error reporting for high-level APIs  (Andrii Nakryiko)
Implement changes to error reporting for high-level libbpf APIs to make them less surprising and less error-prone to users:
- in all cases when an error happens, errno is set to an appropriate error value;
- in libbpf 1.0 mode, all pointer-returning APIs return NULL on error and the error code is communicated through errno; this applies both to APIs that already returned NULL before (so now they communicate more detailed error codes), as well as to many APIs that used the ERR_PTR() macro and encoded error numbers as fake pointers;
- in legacy (default) mode, those APIs that were returning ERR_PTR(err) continue doing so, but still set errno.
With these changes, errno can always be used to extract the actual error, regardless of legacy or libbpf 1.0 modes. This is utilized internally in libbpf in places where libbpf uses its own high-level APIs. libbpf_get_error() is adapted to handle both cases completely transparently to end-users (and is used by libbpf consistently as well). More context, justification, and discussion can be found in the "Libbpf: the road to v1.0" document ([0]). [0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20210525035935.1461796-5-andrii@kernel.org
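In practice, error handling can now be written once and works in both modes (a sketch; prog is assumed to be a valid bpf_program, and errno.h/stdio.h/string.h are assumed to be included):

    struct bpf_link *link = bpf_program__attach(prog);

    if (libbpf_get_error(link)) {
        /* legacy mode: link is ERR_PTR(-err); 1.0 mode: link is NULL;
         * in both modes errno holds the actual error */
        fprintf(stderr, "attach failed: %s\n", strerror(errno));
        return -errno;
    }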
2021-05-25  libbpf: Add libbpf_set_strict_mode() API to turn on libbpf 1.0 behaviors  (Andrii Nakryiko)
Add a libbpf_set_strict_mode() API that allows an application to simulate libbpf 1.0 breaking changes before libbpf 1.0 is released. This will help users migrate gradually and with confidence. For now only ALL or NONE options are available; subsequent patches will add more flags. This patch is preliminary for selftests/bpf changes. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20210525035935.1461796-2-andrii@kernel.org
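Typical usage is a single call early in the application, shown here with the catch-all flag (later patches add finer-grained ones):

    #include <stdio.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
        /* opt into all libbpf 1.0 behaviors up front */
        if (libbpf_set_strict_mode(LIBBPF_STRICT_ALL))
            fprintf(stderr, "failed to enable strict mode\n");
        /* ... open/load/attach BPF objects as usual ... */
        return 0;
    }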
2021-05-24  libbpf: Skip bpf_object__probe_loading for light skeleton  (Stanislav Fomichev)
I'm getting the following error when running 'gen skeleton -L' as a regular user:
libbpf: Error in bpf_object__probe_loading():Operation not permitted(1). Couldn't load trivial BPF program. Make sure your kernel supports BPF (CONFIG_BPF_SYSCALL=y) and/or that RLIMIT_MEMLOCK is set to big enough value.
Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.") Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210521030653.2626513-1-sdf@google.com
2021-05-19  libbpf: Introduce bpf_map__initial_value().  (Alexei Starovoitov)
Introduce bpf_map__initial_value() to read initial contents of mmaped data/rodata/bss maps. Note that bpf_map__set_initial_value() doesn't allow modifying kconfig map while bpf_map__initial_value() allows reading its values. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-17-alexei.starovoitov@gmail.com
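For instance, reading back the contents of a skeleton's .rodata map could look like this sketch (the skel->maps.rodata accessor assumes a generated skeleton; any mmaped data/rodata/bss map works):

    size_t sz;
    void *data = bpf_map__initial_value(skel->maps.rodata, &sz);

    if (data)
        printf("initial .rodata is %zu bytes\n", sz);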
2021-05-19  libbpf: Generate loader program out of BPF ELF file.  (Alexei Starovoitov)
The BPF program loading process performed by libbpf is quite complex and consists of the following steps:
"open" phase:
- parse elf file and remember relocations, sections
- collect externs and ksyms including their btf_ids in prog's BTF
- patch BTF datasec (since llvm couldn't do it)
- init maps (old style map_def, BTF based, global data map, kconfig map)
- collect relocations against progs and maps
"load" phase:
- probe kernel features
- load vmlinux BTF
- resolve externs (kconfig and ksym)
- load program BTF
- init struct_ops
- create maps
- apply CO-RE relocations
- patch ld_imm64 insns with src_reg=PSEUDO_MAP, PSEUDO_MAP_VALUE, PSEUDO_BTF_ID
- reposition subprograms and adjust call insns
- sanitize and load progs
During this process libbpf does sys_bpf() calls to load BTF, create maps, populate maps and finally load programs. Instead of actually doing the syscalls, generate a trace of what libbpf would have done and represent it as the "loader program". The "loader program" consists of a single map with:
- union bpf_attr(s)
- BTF bytes
- map value bytes
- insns bytes
and a single bpf program that passes bpf_attr(s) and data into the bpf_sys_bpf() helper. Executing such a "loader program" via the bpf_prog_test_run() command will replay the sequence of syscalls that libbpf would have done, which will result in the same maps being created and programs loaded as specified in the elf file. The "loader program" removes libelf and the majority of the libbpf dependency from the program loading process. kconfig, typeless ksym, struct_ops and CO-RE are not supported yet. The order of relocate_data and relocate_calls had to change so that bpf_gen__prog_load() can see all relocations for a given program with correct insn_idx-es. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-15-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Preliminary support for fd_idx  (Alexei Starovoitov)
Prep libbpf to use FD_IDX kernel feature when generating loader program. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-14-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Add bpf_object pointer to kernel_supports().  (Alexei Starovoitov)
Add a pointer to 'struct bpf_object' to kernel_supports() helper. It will be used in the next patch. No functional changes. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-13-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Change the order of data and text relocations.  (Alexei Starovoitov)
In order to be able to generate loader program in the later patches change the order of data and text relocations. Also improve the test to include data relos. If the kernel supports "FD array" the map_fd relocations can be processed before text relos since generated loader program won't need to manually patch ld_imm64 insns with map_fd. But ksym and kfunc relocations can only be processed after all calls are relocated, since loader program will consist of a sequence of calls to bpf_btf_find_by_name_kind() followed by patching of btf_id and btf_obj_fd into corresponding ld_imm64 insns. The locations of those ld_imm64 insns are specified in relocations. Hence process all data relocations (maps, ksym, kfunc) together after call relos. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-12-alexei.starovoitov@gmail.com
2021-05-19  libbpf: Support for syscall program type  (Alexei Starovoitov)
Trivial support for syscall program type. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210514003623.28033-5-alexei.starovoitov@gmail.com
2021-05-14  libbpf: Reject static entry-point BPF programs  (Andrii Nakryiko)
Detect use of static entry-point BPF programs (those with SEC() markings) and emit error message. This is similar to c1cccec9c636 ("libbpf: Reject static maps") but for BPF programs. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20210514195534.1440970-1-andrii@kernel.org
2021-05-13  libbpf: Reject static maps  (Andrii Nakryiko)
Static maps never really worked with libbpf, because all such maps were always silently resolved to the very first map. Detect static maps (both legacy and BTF-defined) and report a user-friendly error. Tested locally by switching a few maps (legacy and BTF-defined) in selftests to static ones and verifying that libbpf now rejects them loudly. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210513233643.194711-2-andrii@kernel.org
2021-05-11  libbpf: Treat STV_INTERNAL same as STV_HIDDEN for functions  (Andrii Nakryiko)
Do the same global -> static BTF update for global functions with STV_INTERNAL visibility to turn on static BPF verification mode. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210507054119.270888-7-andrii@kernel.org
2021-05-05  libbpf: Add NULL check to add_dummy_ksym_var  (Ian Rogers)
Avoids a segv if btf isn't present. Seen on the call path __bpf_object__open calling bpf_object__collect_externs. Fixes: 5bd022ec01f0 ("libbpf: Support extern kernel function") Suggested-by: Stanislav Fomichev <sdf@google.com> Suggested-by: Petar Penkov <ppenkov@google.com> Signed-off-by: Ian Rogers <irogers@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210504234910.976501-1-irogers@google.com
2021-04-26  libbpf: Support BTF_KIND_FLOAT during type compatibility checks in CO-RE  (Andrii Nakryiko)
Add BTF_KIND_FLOAT support when doing CO-RE field type compatibility check. Without this, relocations against float/double fields will fail. Also adjust one error message to emit instruction index instead of less convenient instruction byte offset. Fixes: 22541a9eeb0d ("libbpf: Add BTF_KIND_FLOAT support") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Lorenz Bauer <lmb@cloudflare.com> Link: https://lore.kernel.org/bpf/20210426192949.416837-3-andrii@kernel.org
2021-04-23  libbpf: Make few internal helpers available outside of libbpf.c  (Andrii Nakryiko)
Make the skip_mods_and_typedefs(), btf_kind_str(), and btf_func_linkage() helpers available outside of libbpf.c, to be used by the static linker code. Also do a few cleanups (error code fixes, comment clean-ups, etc.) that don't deserve their own commit. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20210423181348.1801389-9-andrii@kernel.org