path: root/tools/lib/bpf/libbpf.c
Age  Commit message  Author
2019-10-21  libbpf: Add bpf_program__get_{type, expected_attach_type} APIs  (Andrii Nakryiko)
There are bpf_program__set_type() and bpf_program__set_expected_attach_type(), but no corresponding getters, which seems rather incomplete. Fix this.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-3-andriin@fb.com
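For illustration, a minimal sketch of the new getters alongside the existing open/close APIs (the object path "prog.o" is hypothetical):

    #include <stdio.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
        struct bpf_object *obj = bpf_object__open("prog.o"); /* hypothetical file */
        struct bpf_program *prog;

        if (libbpf_get_error(obj))
            return 1;
        bpf_object__for_each_program(prog, obj) {
            /* getters mirror bpf_program__set_{type, expected_attach_type}() */
            printf("type=%d expected_attach_type=%d\n",
                   bpf_program__get_type(prog),
                   bpf_program__get_expected_attach_type(prog));
        }
        bpf_object__close(obj);
        return 0;
    }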
2019-10-21  tools, bpf: Rename pr_warning to pr_warn to align with kernel logging  (Kefeng Wang)
For kernel logging macros, pr_warning() is completely removed and replaced by pr_warn(). By using pr_warn() in tools/lib/bpf/ for symmetry to kernel logging macros, we could eventually drop the use of pr_warning() in the whole kernel tree.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191021055532.185245-1-wangkefeng.wang@huawei.com
2019-10-18  bpf, libbpf: Add kernel version section parsing back  (John Fastabend)
With commit "libbpf: stop enforcing kern_version,..." we removed the kernel version section parsing in favor of querying the kernel via uname() and populating the version with the result. After this, any version sections were simply ignored. Unfortunately, the world of kernels is not so friendly: I've found some customized kernels where uname() does not match the in-kernel version. So that programs can load in such environments, this patch adds back parsing of the section; if it exists, the user-specified kernel version overrides the uname() result. Most of the kernel uname() discovery bits are kept, however, so users are not required to provide the version except in these odd cases.

Fixes: 5e61f27070292 ("libbpf: stop enforcing kern_version, populate it for users")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157140968634.9073.6407090804163937103.stgit@john-XPS-13-9370
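For context, the override hook is the classic "version" ELF section; a sketch of what a program author would add (bpf_helpers.h provides SEC()):

    #include <linux/types.h>
    #include <linux/version.h>
    #include "bpf_helpers.h"

    /* if present, libbpf now uses this value instead of the one
     * derived from uname() */
    __u32 _version SEC("version") = LINUX_VERSION_CODE;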
2019-10-17  libbpf: Auto-detect btf_id of BTF-based raw_tracepoints  (Alexei Starovoitov)
It is the responsibility of the BPF program author to annotate the program with SEC("tp_btf/name"), where "name" is a valid raw tracepoint. libbpf will try to find "name" in vmlinux BTF and error out if vmlinux BTF is not available or "name" is not found. If "name" is indeed a valid raw tracepoint, the in-kernel BTF will have a "btf_trace_##name" typedef that points to the function prototype of that raw tracepoint. The BTF description captures the exact arguments the kernel C code passes into the raw tracepoint, and the kernel verifier checks the types while loading the BPF program. libbpf keeps the BTF type id in expected_attach_type, but since the kernel ignores this attribute for tracing programs, it is copied into the attach_btf_id attribute before loading. Later, the kernel uses prog->attach_btf_id to select the raw tracepoint during the bpf_raw_tracepoint_open syscall command.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-6-ast@kernel.org
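A sketch of the annotation this commit acts on ("sched_switch" is just an example of a valid raw tracepoint name; the ctx-array signature is illustrative):

    #include <linux/types.h>
    #include "bpf_helpers.h"

    /* libbpf resolves "sched_switch" via the btf_trace_sched_switch
     * typedef in vmlinux BTF and records the BTF type id */
    SEC("tp_btf/sched_switch")
    int trace_sched_switch(__u64 *ctx)
    {
        /* ctx[i] are the raw tracepoint arguments; with BTF their
         * types are known to the verifier */
        return 0;
    }

    char _license[] SEC("license") = "GPL";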
2019-10-15  libbpf: Add support for field existence CO-RE relocation  (Andrii Nakryiko)
Add support for the BPF_FRK_EXISTS relocation kind to detect the existence of a captured field in a destination BTF, allowing conditional logic to handle incompatible differences between kernels. Also introduce an opt-in relaxed CO-RE relocation handling option, which makes libbpf emit a warning for failed relocations but proceed with the others. The instruction for which a relocation failed is patched with a (u32)-1 value.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-4-andriin@fb.com
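A hedged sketch of the program-side pattern this enables; bpf_core_field_exists() is the helper macro from bpf_core_read.h of roughly this era, and the struct/field names are made up:

    #include "bpf_core_read.h"   /* bpf_core_field_exists() */

    /* local mirror of a kernel struct; the field may be absent on
     * older kernels */
    struct sample___local {
        int common;
        int new_field;   /* hypothetical field */
    };

    static int read_new_field(struct sample___local *s)
    {
        if (bpf_core_field_exists(s->new_field))
            return s->new_field;   /* relocated only if the field exists */
        return 0;                  /* fall back on older kernels */
    }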
2019-10-15  libbpf: Refactor bpf_object__open APIs to use common opts  (Andrii Nakryiko)
Refactor all the various bpf_object__open variations to ultimately accept a common bpf_object_open_opts struct. This makes it easy to keep extending this common struct with extra parameters without having to update all the legacy APIs.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-3-andriin@fb.com
2019-10-15  libbpf: Update BTF reloc support to latest Clang format  (Andrii Nakryiko)
BTF offset relocation was generalized in recent Clang into field relocation, capturing an extra u32 field that specifies which aspect of the captured field needs to be relocated. This changes .BTF.ext's record size for this relocation from 12 to 16 bytes. Given these format changes happened in Clang before an officially released version, it's OK to not support the outdated 12-byte record size without breaking ABI.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-2-andriin@fb.com
2019-10-05  libbpf: fix bpf_object__name() to actually return object name  (Andrii Nakryiko)
bpf_object__name() was returning file path, not name. Fix this.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-10-05  libbpf: add bpf_object__open_{file, mem} w/ extensible opts  (Andrii Nakryiko)
Add a new set of bpf_object__open APIs using a new approach to optional-parameter extensibility that allows a simpler ABI compatibility story. This patch demonstrates an approach to implementing libbpf APIs that makes it easy to extend existing APIs with extra optional parameters in such a way that ABI compatibility is preserved without having to do symbol versioning and generating lots of boilerplate code to handle it.

To facilitate succinct code for working with options, add OPTS_VALID, OPTS_HAS, and OPTS_GET macros that hide all the NULL, size, and zero checks.

Additionally, newly added libbpf APIs are encouraged to follow a similar pattern of having all mandatory parameters as formal function parameters and always having an optional (NULL-able) xxx_opts struct, which should always have the real struct size as its first field; the rest are optional parameters, added over time, which tune the behavior of the existing API if specified by the user.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
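A sketch of the pattern from the caller's side (the object_name field follows this patch's bpf_object_open_opts; "prog.o" and the name are examples):

    #include <bpf/libbpf.h>

    int open_with_opts(void)
    {
        struct bpf_object_open_opts opts = {
            .sz = sizeof(opts),          /* real struct size goes first */
            .object_name = "my_object",  /* optional parameter */
        };
        struct bpf_object *obj;

        /* passing NULL opts gives default behavior; inside libbpf the
         * fields are read via OPTS_GET(opts, object_name, NULL) */
        obj = bpf_object__open_file("prog.o", &opts);
        if (libbpf_get_error(obj))
            return -1;
        bpf_object__close(obj);
        return 0;
    }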
2019-10-05  libbpf: stop enforcing kern_version, populate it for users  (Andrii Nakryiko)
Kernel version enforcement for kprobes/kretprobes was removed from the 5.0 kernel in 6c4fc209fcf9 ("bpf: remove useless version check for prog load"). Since then, BPF programs were specifying SEC("version") just to please libbpf. We should stop enforcing this in libbpf, if even the kernel doesn't care. Furthermore, libbpf will now pre-populate the current kernel version of the host system, in case we are still running on an old kernel.

This patch also removes __bpf_object__open_xattr from libbpf.h, as nothing in libbpf relies on having it in that header. That function was never exported as LIBBPF_API, and even the name suggests an internal version, so this should be safe to remove without breaking ABI.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
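A minimal sketch of deriving the version code from uname(), in the spirit of what libbpf now pre-populates (the helper name is mine):

    #include <stdio.h>
    #include <sys/utsname.h>

    static unsigned int kernel_version_code(void)   /* hypothetical helper */
    {
        unsigned int major, minor, patch;
        struct utsname info;

        uname(&info);
        if (sscanf(info.release, "%u.%u.%u", &major, &minor, &patch) != 3)
            return 0;
        /* same packing as the KERNEL_VERSION() macro */
        return (major << 16) + (minor << 8) + patch;
    }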
2019-08-19  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)
Merge conflict of mlx5 resolved using instructions in merge commit 9566e650bf7fdf58384bb06df634f7531ca3a97e.

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-08-13  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski)
Daniel Borkmann says:

====================
The following pull-request contains BPF updates for your *net-next* tree.

There is a small merge conflict in libbpf (Cc Andrii so he's in the loop as well):

    for (i = 1; i <= btf__get_nr_types(btf); i++) {
            t = (struct btf_type *)btf__type_by_id(btf, i);

            if (!has_datasec && btf_is_var(t)) {
                    /* replace VAR with INT */
                    t->info = BTF_INFO_ENC(BTF_KIND_INT, 0, 0);
    <<<<<<< HEAD
                    /*
                     * using size = 1 is the safest choice, 4 will be too
                     * big and cause kernel BTF validation failure if
                     * original variable took less than 4 bytes
                     */
                    t->size = 1;
                    *(int *)(t+1) = BTF_INT_ENC(0, 0, 8);
            } else if (!has_datasec && kind == BTF_KIND_DATASEC) {
    =======
                    t->size = sizeof(int);
                    *(int *)(t + 1) = BTF_INT_ENC(0, 0, 32);
            } else if (!has_datasec && btf_is_datasec(t)) {
    >>>>>>> 72ef80b5ee131e96172f19e74b4f98fa3404efe8
                    /* replace DATASEC with STRUCT */

Conflict is between the two commits 1d4126c4e119 ("libbpf: sanitize VAR to conservative 1-byte INT") and b03bc6853c0e ("libbpf: convert libbpf code to use new btf helpers"), so we need to pick the sanitation fixup as well as use the new btf_is_datasec() helper and the whitespace cleanup. Looks like the following:

    [...]
    if (!has_datasec && btf_is_var(t)) {
            /* replace VAR with INT */
            t->info = BTF_INFO_ENC(BTF_KIND_INT, 0, 0);
            /*
             * using size = 1 is the safest choice, 4 will be too
             * big and cause kernel BTF validation failure if
             * original variable took less than 4 bytes
             */
            t->size = 1;
            *(int *)(t + 1) = BTF_INT_ENC(0, 0, 8);
    } else if (!has_datasec && btf_is_datasec(t)) {
            /* replace DATASEC with STRUCT */
    [...]

The main changes are:

1) Addition of core parts of compile once - run everywhere (co-re) effort, that is, relocation of fields offsets in libbpf as well as exposure of kernel's own BTF via sysfs and loading through libbpf, from Andrii. More info on co-re: http://vger.kernel.org/bpfconf2019.html#session-2 and http://vger.kernel.org/lpc-bpf2018.html#session-2

2) Enable passing input flags to the BPF flow dissector to customize parsing and allowing it to stop early similar to the C based one, from Stanislav.

3) Add a BPF helper function that allows generating SYN cookies from XDP and tc BPF, from Petar.

4) Add devmap hash-based map type for more flexibility in device lookup for redirects, from Toke.

5) Improvements to XDP forwarding sample code now utilizing recently enabled devmap lookups, from Jesper.

6) Add support for reporting the effective cgroup progs in bpftool, from Jakub and Takshak.

7) Fix reading kernel config from bpftool via /proc/config.gz, from Peter.

8) Fix AF_XDP umem pages mapping for 32 bit architectures, from Ivan.

9) Follow-up to add two more BPF loop tests for the selftest suite, from Alexei.

10) Add perf event output helper also for other skb-based program types, from Allan.

11) Fix a co-re related compilation error in selftests, from Yonghong.
====================

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-08-13  libbpf: attempt to load kernel BTF from sysfs first  (Andrii Nakryiko)
Add support for loading kernel BTF from sysfs (/sys/kernel/btf/vmlinux) as a target BTF. Also extend the list of on disk search paths for vmlinux ELF image with entries that perf is searching for.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
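A sketch of what the sysfs path amounts to: read the raw bytes and hand them to btf__new() (the helper name is mine; error handling trimmed; btf__new() copies the data):

    #include <stdio.h>
    #include <stdlib.h>
    #include <bpf/btf.h>

    static struct btf *load_sysfs_btf(void)   /* hypothetical helper */
    {
        struct btf *btf = NULL;
        long sz;
        char *buf;
        FILE *f = fopen("/sys/kernel/btf/vmlinux", "rb");

        if (!f)
            return NULL;   /* fall back to on-disk vmlinux ELF paths */
        fseek(f, 0, SEEK_END);
        sz = ftell(f);
        fseek(f, 0, SEEK_SET);
        buf = malloc(sz);
        if (buf && fread(buf, 1, sz, f) == (size_t)sz)
            btf = btf__new(buf, sz);   /* parses raw BTF bytes */
        free(buf);
        fclose(f);
        return btf;
    }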
2019-08-07  libbpf: implement BPF CO-RE offset relocation algorithm  (Andrii Nakryiko)
This patch implements the core logic for BPF CO-RE offset relocations. Every instruction that needs to be relocated has a corresponding bpf_offset_reloc record as part of .BTF.ext. Relocations are performed by trying to match the recorded "local" relocation spec against potentially many compatible "target" types, creating a corresponding spec. Details of the algorithm are noted in corresponding comments in the code.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-08-07  libbpf: convert libbpf code to use new btf helpers  (Andrii Nakryiko)
Simplify code by relying on newly added BTF helper functions.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-08-01  libbpf: set BTF FD for prog only when there is supported .BTF.ext data  (Andrii Nakryiko)
5d01ab7bac46 ("libbpf: fix erroneous multi-closing of BTF FD") introduced a backwards-compatibility issue, manifesting itself as an -E2BIG error returned on program load due to an unknown non-zero btf_fd attribute value for the BPF_PROG_LOAD sys_bpf() sub-command. This patch fixes the bug by ensuring that we only ever associate a BTF FD with a program if there is .BTF.ext data that was successfully loaded into the kernel, which automatically means the kernel supports func_info/line_info and the associated BTF FD for progs (checked and ensured also by the BTF sanitization code).

Fixes: 5d01ab7bac46 ("libbpf: fix erroneous multi-closing of BTF FD")
Reported-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-07-31  libbpf: make libbpf_num_possible_cpus function thread-safe  (Takshak Chahande)
Having the static variable `cpus` in the libbpf_num_possible_cpus function without guarding it with a mutex makes this function thread-unsafe. If multiple threads access this function in its current form, the static variable `cpus` can end up incremented to a multiple of the total available CPUs. Use a local stack variable to calculate the number of possible CPUs, and then update the static variable using WRITE_ONCE().

Changes since v1:
* added stack variable to calculate cpus
* serialized static variable update using WRITE_ONCE()
* fixed Fixes tag

Fixes: 6446b3155521 ("bpf: add a new API libbpf_num_possible_cpus()")
Signed-off-by: Takshak Chahande <ctakshak@fb.com>
Acked-by: Andrey Ignatov <rdna@fb.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
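The shape of the fix, sketched (WRITE_ONCE stands in for the tools/include compiler.h macro; parse_possible_cpus() is a hypothetical stand-in for the sysfs parsing step):

    #include <errno.h>

    static int cpus;   /* cached count, written only with a final value */

    int libbpf_num_possible_cpus(void)
    {
        int tmp_cpus;

        if (cpus > 0)
            return cpus;   /* already computed */

        tmp_cpus = parse_possible_cpus();   /* hypothetical: parses
                                             * /sys/devices/system/cpu/possible */
        if (tmp_cpus <= 0)
            return -EINVAL;

        /* concurrent callers compute the same value, so a single plain
         * store is safe; no increment of the shared variable */
        WRITE_ONCE(cpus, tmp_cpus);
        return tmp_cpus;
    }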
2019-07-27  libbpf: return previous print callback from libbpf_set_print  (Andrii Nakryiko)
By returning the previously set print callback from libbpf_set_print, it's possible to restore it eventually. This is useful when running many independent tests with one default print function, while overriding log verbosity for a particular subset of tests.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
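Typical use, sketched (quiet_print is an example callback):

    #include <stdarg.h>
    #include <stdio.h>
    #include <bpf/libbpf.h>

    static int quiet_print(enum libbpf_print_level level,
                           const char *fmt, va_list args)
    {
        if (level == LIBBPF_DEBUG)
            return 0;   /* suppress debug output for this test */
        return vfprintf(stderr, fmt, args);
    }

    void run_quiet_test(void)
    {
        libbpf_print_fn_t old = libbpf_set_print(quiet_print);

        /* ... run the noisy test ... */

        libbpf_set_print(old);   /* restore the previous callback */
    }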
2019-07-26  libbpf: fix erroneous multi-closing of BTF FD  (Andrii Nakryiko)
Libbpf stores the associated BTF FD per each instance of bpf_program. When a program is unloaded, that FD is closed. This is wrong, because it leads to a race and possibly to closing unrelated files, if the application simultaneously opens new files while bpf_programs are unloaded. It's also unnecessary, because struct btf "owns" that FD, and btf__free(), called from bpf_object__close(), will close it. Thus the fix is to never have a per-program BTF FD and to fetch it from obj->btf when necessary.

Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections")
Reported-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-07-23  libbpf: provide more helpful message on uninitialized global var  (Andrii Nakryiko)
When a BPF program defines an uninitialized global variable, it is put into a special COMMON section. Libbpf will reject such programs, but with a very unhelpful message containing a garbage-looking section index. This patch detects the special section cases and gives a more explicit error message.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-07-22  libbpf: Avoid designated initializers for unnamed union members  (Arnaldo Carvalho de Melo)
As it fails to build in some systems with:

    libbpf.c: In function 'perf_buffer__new':
    libbpf.c:4515: error: unknown field 'sample_period' specified in initializer
    libbpf.c:4516: error: unknown field 'wakeup_events' specified in initializer

Assigning the field instead, i.e. not as a designated initializer:

    attr.sample_period = 1;

makes it build everywhere.

Cc: Andrii Nakryiko <andriin@fb.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Fixes: fb84b8224655 ("libbpf: add perf buffer API")
Link: https://lkml.kernel.org/n/tip-hnlmch8qit1ieksfppmr32si@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-22  libbpf: Fix endianness macro usage for some compilers  (Arnaldo Carvalho de Melo)
Using endian.h and its endianness macros makes this code build in a wider range of compilers, as some don't have the compiler-provided macros (__BYTE_ORDER__, __ORDER_LITTLE_ENDIAN__, __ORDER_BIG_ENDIAN__). Use endian.h's macros (__BYTE_ORDER, __LITTLE_ENDIAN, __BIG_ENDIAN) instead, which also makes this code shorter :-)

Acked-by: Andrii Nakryiko <andriin@fb.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Fixes: 12ef5634a855 ("libbpf: simplify endianness check")
Fixes: e6c64855fd7a ("libbpf: add btf__parse_elf API to load .BTF and .BTF.ext")
Link: https://lkml.kernel.org/n/tip-eep5n8vgwcdphw3uc058k03u@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
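The resulting check, sketched:

    #include <endian.h>

    /* endian.h's macros are available even where the compiler-provided
     * __BYTE_ORDER__ family is not */
    #if __BYTE_ORDER == __LITTLE_ENDIAN
    static const int host_is_big_endian = 0;
    #elif __BYTE_ORDER == __BIG_ENDIAN
    static const int host_is_big_endian = 1;
    #else
    #error "Unrecognized __BYTE_ORDER"
    #endif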
2019-07-19  libbpf: sanitize VAR to conservative 1-byte INT  (Andrii Nakryiko)
If a VAR in non-sanitized BTF has a size of less than 4 bytes, converting such a VAR into an INT with size=4 will cause BTF validation failure due to violating the member size of the STRUCT (into which the DATASEC was converted). Fix by conservatively using size=1.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-07-19  libbpf: fix SIGSEGV when BTF loading fails, but .BTF.ext exists  (Andrii Nakryiko)
In the case when BTF loading fails despite sanitization, but the BPF object has .BTF.ext loaded as well, we free and null obj->btf, but not obj->btf_ext. This leads to an attempt to relocate .BTF.ext later on during bpf_object__load(), which assumes obj->btf is present, and thus to a SIGSEGV on NULL pointer access. Fix the bug by freeing and nulling obj->btf_ext as well.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-07-12  libbpf: fix ptr to u64 conversion warning on 32-bit platforms  (Andrii Nakryiko)
On 32-bit platforms the compiler complains about the conversion:

    libbpf.c: In function ‘perf_event_open_probe’:
    libbpf.c:4112:17: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
       attr.config1 = (uint64_t)(void *)name; /* kprobe_func or uprobe_path */
                      ^

Reported-by: Matt Hart <matthew.hart@linaro.org>
Fixes: b26500274767 ("libbpf: add kprobe/uprobe attach API")
Tested-by: Matt Hart <matthew.hart@linaro.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
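The usual fix for this class of warning is to widen through unsigned long, which matches the pointer size on both 32- and 64-bit targets (a sketch of the pattern, not the literal patch):

    #include <stdint.h>

    static uint64_t ptr_to_u64(const void *ptr)
    {
        /* pointer -> unsigned long is always size-preserving;
         * unsigned long -> uint64_t then widens without a warning */
        return (uint64_t)(unsigned long)ptr;
    }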
2019-07-08  libbpf: auto-set PERF_EVENT_ARRAY size to number of CPUs  (Andrii Nakryiko)
For BPF_MAP_TYPE_PERF_EVENT_ARRAY, the typically correct size is the number of possible CPUs, which is impossible to specify at compilation time. This change automatically sets the PERF_EVENT_ARRAY size to the number of system CPUs, unless a non-zero size is specified explicitly. This still allows adjusting the size for advanced specific cases, while providing a convenient and logical default.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-08  libbpf: add perf buffer API  (Andrii Nakryiko)
The BPF_MAP_TYPE_PERF_EVENT_ARRAY map is often used to send data from a BPF program to user space for additional processing. libbpf already has a very low-level API to read a single CPU's perf buffer, bpf_perf_event_read_simple(), but it's hard to use and requires a lot of code to set everything up. This patch adds a perf_buffer abstraction on top of it, wrapping the per-CPU setup and polling logic in a simple and convenient API, similar to what BCC provides.

perf_buffer__new() sets up per-CPU ring buffers and updates corresponding BPF map entries. It accepts two user-provided callbacks: one for handling raw samples and one for notifications of samples lost due to buffer overflow. perf_buffer__new_raw() is similar, but provides more control over how perf events are set up (by accepting a user-provided perf_event_attr), how they are handled (the perf_event_header pointer is passed directly to a user-provided callback), and on which CPUs ring buffers are created (it's possible to provide a list of CPUs and corresponding map keys to update). This API gives advanced users fuller control.

perf_buffer__poll() is used to fetch ring buffer data across all CPUs, utilizing an epoll instance. perf_buffer__free() does the corresponding clean-up and unsets FDs from the BPF map.

None of these APIs are thread-safe; users should ensure proper locking/coordination in a multi-threaded setup.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
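A usage sketch against the opts layout as introduced here (map FD acquisition elided; 8 pages per CPU and the 100 ms timeout are arbitrary choices):

    #include <bpf/libbpf.h>

    static void on_sample(void *ctx, int cpu, void *data, __u32 size)
    {
        /* one raw sample from the per-CPU ring buffer */
    }

    static void on_lost(void *ctx, int cpu, __u64 cnt)
    {
        /* cnt samples were dropped on this CPU due to overflow */
    }

    int consume(int map_fd)
    {
        struct perf_buffer_opts pb_opts = {
            .sample_cb = on_sample,
            .lost_cb = on_lost,
        };
        struct perf_buffer *pb = perf_buffer__new(map_fd, 8, &pb_opts);
        int err = libbpf_get_error(pb);

        if (err)
            return err;
        while ((err = perf_buffer__poll(pb, 100)) >= 0)
            ;   /* callbacks fire from within poll */
        perf_buffer__free(pb);
        return err;
    }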
2019-07-05  libbpf: capture value in BTF type info for BTF-defined map defs  (Andrii Nakryiko)
Change BTF-defined map definitions to capture compile-time integer values as part of BTF type definition, to avoid split of key/value type information and actual type/size/flags initialization for maps.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-05  libbpf: add raw tracepoint attach API  (Andrii Nakryiko)
Add a wrapper utilizing bpf_link "infrastructure" to allow attaching BPF programs to raw tracepoints.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-05  libbpf: add tracepoint attach API  (Andrii Nakryiko)
Allow attaching BPF programs to kernel tracepoint BPF hooks specified by category and name.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-05  libbpf: add kprobe/uprobe attach API  (Andrii Nakryiko)
Add ability to attach to kernel and user probes and retprobes. Implementation depends on perf event support for kprobes/uprobes.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
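Usage sketch (program lookup and error handling trimmed; the kernel symbol name is an example):

    struct bpf_link *link;

    /* retprobe=false attaches at function entry; true at return */
    link = bpf_program__attach_kprobe(prog, false, "do_sys_open");
    if (libbpf_get_error(link))
        return -1;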
2019-07-05  libbpf: add ability to attach/detach BPF program to perf event  (Andrii Nakryiko)
bpf_program__attach_perf_event() allows attaching a BPF program to an existing perf event hook, providing the most generic and lowest-level way to attach BPF programs. It returns a struct bpf_link, which should be passed to bpf_link__destroy() to detach and free the resources associated with the link.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-05  libbpf: introduce concept of bpf_link  (Andrii Nakryiko)
bpf_link is an abstraction of an association between a BPF program and one of many possible BPF attachment points (hooks). This allows a uniform interface for detaching BPF programs regardless of the nature of the link and how it was created. Details of creating and setting up a specific bpf_link are handled by the corresponding attachment methods (bpf_program__attach_xxx) added in subsequent commits. Once successfully created, a bpf_link has to eventually be destroyed with bpf_link__destroy(), at which point the BPF program is disassociated from the hook and all the relevant resources are freed.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
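The resulting lifecycle, sketched with the raw tracepoint variant from the commit above (tracepoint name is an example):

    struct bpf_link *link;

    link = bpf_program__attach_raw_tracepoint(prog, "sched_switch");
    if (libbpf_get_error(link))
        return -1;

    /* ... BPF program stays attached while the link exists ... */

    bpf_link__destroy(link);   /* detach and free link resources */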
2019-07-03  bpf, libbpf, smatch: Fix potential NULL pointer dereference  (Leo Yan)
Based on the following report from Smatch, fix the potential NULL pointer dereference:

    tools/lib/bpf/libbpf.c:3493 bpf_prog_load_xattr()
    warn: variable dereferenced before check 'attr' (see line 3483)

    3479 int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
    3480                         struct bpf_object **pobj, int *prog_fd)
    3481 {
    3482         struct bpf_object_open_attr open_attr = {
    3483                 .file = attr->file,
    3484                 .prog_type = attr->prog_type,
                                      ^^^^^^
    3485         };

At the head of the function, 'attr' is accessed directly without checking whether it is a NULL pointer. This patch moves the value assignments after validating 'attr' and 'attr->file'.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-27  libbpf: support sockopt hooks  (Stanislav Fomichev)
Make libbpf aware of new sockopt hooks so it can derive prog type and hook point from the section names.

Cc: Andrii Nakryiko <andriin@fb.com>
Cc: Martin Lau <kafai@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
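The section names involved, sketched (returning 1 lets the sockopt call proceed):

    #include <linux/bpf.h>
    #include "bpf_helpers.h"

    SEC("cgroup/getsockopt")
    int getsockopt_prog(struct bpf_sockopt *ctx)
    {
        return 1;   /* allow */
    }

    SEC("cgroup/setsockopt")
    int setsockopt_prog(struct bpf_sockopt *ctx)
    {
        return 1;
    }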
2019-06-26  libbpf: fix max() type mismatch for 32bit  (Ivan Khoronzhuk)
Fix a build error for 32-bit targets caused by a size_t/unsigned long type mismatch.

Fixes: bf82927125dd ("libbpf: refactor map initialization")
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-24  libbpf: fix spelling mistake "conflictling" -> "conflicting"  (Colin Ian King)
There are several spelling mistakes in pr_warning messages. Fix these.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-06-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Alexei Starovoitov says:

====================
pull-request: bpf-next 2019-06-19

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) new SO_REUSEPORT_DETACH_BPF setsockopt, from Martin.

2) BTF based map definition, from Andrii.

3) support bpf_map_lookup_elem for xskmap, from Jonathan.

4) bounded loops and scalar precision logic in the verifier, from Alexei.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-19  libbpf: constify getter APIs  (Andrii Nakryiko)
Add const qualifiers to bpf_object/bpf_program/bpf_map arguments for getter APIs. There is no need for them to not be const pointers.

Verified that

    make -C tools/lib/bpf
    make -C tools/testing/selftests/bpf
    make -C tools/perf

all build without warnings.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-17  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Honestly all the conflicts were simple overlapping changes, nothing really interesting to report.

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-18  libbpf: allow specifying map definitions using BTF  (Andrii Nakryiko)
This patch adds support for a new way to define BPF maps. It relies on BTF to describe mandatory and optional attributes of a map, as well as captures type information of key and value naturally. This eliminates the need for the BPF_ANNOTATE_KV_PAIR hack and ensures key/value sizes are always in sync with the key/value type.

Relying on BTF, this approach allows for both forward and backward compatibility w.r.t. extending supported map definition features. By default, any unrecognized attributes are treated as an error, but it's possible to relax this using the MAPS_RELAX_COMPAT flag. New attributes, added in the future, will need to be optional.

The outline of the new map definition (short, BTF-defined maps) is as follows:

1. All the maps should be defined in the .maps ELF section. It's possible to have both "legacy" map definitions in `maps` sections and BTF-defined maps in .maps sections. Everything will still work transparently.

2. The map declaration and initialization is done through a global/static variable of a struct type with a few mandatory and extra optional fields:
   - type field is mandatory and specifies the type of BPF map;
   - key/value fields are mandatory and capture key/value type/size information;
   - max_entries attribute is optional; if max_entries is not specified or initialized, it has to be provided at runtime through the libbpf API before loading the bpf_object;
   - map_flags is optional and, if not defined, will be assumed to be 0.

3. Key/value fields should be **a pointer** to a type describing key/value. The pointee type is assumed (and will be recorded as such and used for size determination) to be the type describing the key/value of the map. This is done to save excessive amounts of space allocated in corresponding ELF sections for key/value of big size.

4. As some maps disallow having a BTF type ID associated with key/value, it's possible to specify key/value size explicitly without associating a BTF type ID with it. Use the key_size and value_size fields to do that (see example below).

Here's an example of a simple ARRAY map definition:

    struct my_value { int x, y, z; };

    struct {
            int type;
            int max_entries;
            int *key;
            struct my_value *value;
    } btf_map SEC(".maps") = {
            .type = BPF_MAP_TYPE_ARRAY,
            .max_entries = 16,
    };

This will define BPF ARRAY map 'btf_map' with 16 elements. The key will be of type int and thus key size will be 4 bytes. The value is struct my_value of size 12 bytes. This map can be used from C code exactly the same as with existing maps defined through struct bpf_map_def.

Here's an example of STACKMAP definition (which currently disallows BTF type IDs for key/value):

    struct {
            __u32 type;
            __u32 max_entries;
            __u32 map_flags;
            __u32 key_size;
            __u32 value_size;
    } stackmap SEC(".maps") = {
            .type = BPF_MAP_TYPE_STACK_TRACE,
            .max_entries = 128,
            .map_flags = BPF_F_STACK_BUILD_ID,
            .key_size = sizeof(__u32),
            .value_size = PERF_MAX_STACK_DEPTH * sizeof(struct bpf_stack_build_id),
    };

This approach is naturally extended to support map-in-map, by making a value field be another struct that describes the inner map. This feature is not implemented yet. It's also possible to incrementally add features like pinning with full backwards and forward compatibility. Support for static initialization of BPF_MAP_TYPE_PROG_ARRAY using pointers to BPF programs is also on the roadmap.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: split initialization and loading of BTF  (Andrii Nakryiko)
Libbpf sanitizes BTF before loading it into the kernel, if the kernel doesn't support some of the newer BTF features. This removes some important information from BTF (e.g., DATASEC and VAR descriptions), which will be used for map construction. This patch splits BTF processing into an initialization step, in which BTF is initialized from ELF and all the original data is still preserved, and a sanitization/loading step, which ensures that BTF is safe to load into the kernel. This allows using full BTF information to construct maps, while still loading valid BTF into older kernels.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: identify maps by section index in addition to offset  (Andrii Nakryiko)
To support maps being defined in multiple sections, it's important to identify a map not just by its offset within a section, but by the section index as well. This patch adds tracking of the section index. For global data, we record the section index of the corresponding .data/.bss/.rodata ELF section for uniformity, and thus don't need a special offset value for those maps.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: refactor map initialization  (Andrii Nakryiko)
User and global data map initialization has gotten pretty complicated and unnecessarily convoluted. This patch splits out the logic for global data map and user-defined map initialization. It also removes the restriction of pre-calculating how many maps will be initialized, instead allowing new maps to keep being added as they are discovered, which will be used later for BTF-defined map definitions.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: streamline ELF parsing error-handling  (Andrii Nakryiko)
Simplify ELF parsing logic by exiting early, as there is no common clean up path to execute. That makes it unnecessary to track when err was set and when it was cleared. It also reduces nesting in some places.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: extract BTF loading logic  (Andrii Nakryiko)
As a preparation for adding BTF-based BPF map loading, extract .BTF and .BTF.ext loading logic.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-15  libbpf: fix check for presence of associated BTF for map creation  (Andrii Nakryiko)
Kernel internally checks that either key or value type ID is specified, before using btf_fd. Do the same in libbpf's map creation code for determining when to retry map creation w/o BTF.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: fba01a0689a9 ("libbpf: use negative fd to specify missing BTF")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-11  bpf: add a new API libbpf_num_possible_cpus()  (Hechao Li)
Add a new API, libbpf_num_possible_cpus(), that helps users with per-CPU map operations.

Signed-off-by: Hechao Li <hechaol@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
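Typical use with a per-CPU map, sketched (map FD acquisition elided; the helper name is mine):

    #include <linux/types.h>
    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    int sum_percpu_counter(int map_fd, __u32 key, __u64 *sum)
    {
        int i, num_cpus = libbpf_num_possible_cpus();

        if (num_cpus < 0)
            return num_cpus;

        __u64 values[num_cpus];   /* one slot per possible CPU */

        if (bpf_map_lookup_elem(map_fd, &key, values))
            return -1;
        *sum = 0;
        for (i = 0; i < num_cpus; i++)
            *sum += values[i];
        return 0;
    }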
2019-06-06  bpf, libbpf: enable recvmsg attach types  (Daniel Borkmann)
Another trivial patch to libbpf in order to enable identifying and attaching programs to BPF_CGROUP_UDP{4,6}_RECVMSG by section name.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
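The newly recognized section names, sketched (returning 1 allows the call):

    #include <linux/bpf.h>
    #include "bpf_helpers.h"

    SEC("cgroup/recvmsg4")
    int recvmsg4_prog(struct bpf_sock_addr *ctx)
    {
        return 1;   /* allow */
    }

    SEC("cgroup/recvmsg6")
    int recvmsg6_prog(struct bpf_sock_addr *ctx)
    {
        return 1;
    }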
2019-05-31  libbpf: Return btf_fd for load_sk_storage_btf  (Michal Rostecki)
Before this change, the function load_sk_storage_btf expected that libbpf__probe_raw_btf was returning a BTF descriptor, but in fact it was returning information about whether the probe was successful (0 or 1). load_sk_storage_btf was using that value as an argument to the close function, which resulted in closing stdout and thus terminating the process that called the function.

That bug was visible in bpftool. The `bpftool feature` subcommand was always exiting too early (because of the closed stdout) and didn't display all requested probes. `bpftool -j feature` and `bpftool -p feature` were not returning a valid JSON object.

This change renames the libbpf__probe_raw_btf function to libbpf__load_raw_btf, which now returns a BTF descriptor, as expected in load_sk_storage_btf.

v2:
- Fix typo in the commit message.

v3:
- Simplify BTF descriptor handling in bpf_object__probe_btf_* functions.
- Rename libbpf__probe_raw_btf function to libbpf__load_raw_btf and return a BTF descriptor.

v4:
- Fix typo in the commit message.

Fixes: d7c4b3980c18 ("libbpf: detect supported kernel BTF features and sanitize BTF")
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>