path: root/tools/lib/bpf/relo_core.c
Age  Commit message  Author
2021-12-14  libbpf: Fix potential uninit memory read  (Andrii Nakryiko)
In the case of BPF_CORE_TYPE_ID_LOCAL we fill out the target result explicitly, but targ_res itself isn't initialized in that case, so a subsequent call to bpf_core_patch_insn() might read an uninitialized field (such as fail_memsz_adjust here). Ensure that targ_res is zero-initialized for the BPF_CORE_TYPE_ID_LOCAL case. This was reported by the Coverity static analyzer.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211214010032.3843804-1-andrii@kernel.org
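A minimal sketch of the resulting pattern; the fields assigned after the memset() and the surrounding control flow are illustrative assumptions, not the exact libbpf code:

    struct bpf_core_relo_res targ_res;

    if (relo->kind == BPF_CORE_TYPE_ID_LOCAL) {
            /* bpf_core_patch_insn() may later read fields this branch never
             * sets (e.g. fail_memsz_adjust), so clear everything up front
             * instead of relying on partial assignment.
             */
            memset(&targ_res, 0, sizeof(targ_res));
            targ_res.orig_val = local_spec->root_type_id;
            targ_res.new_val = local_spec->root_type_id;
    }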
2021-12-12  libbpf: Don't validate TYPE_ID relo's original imm value  (Andrii Nakryiko)
During linking, type IDs in the resulting linked BPF object file can change, so ldimm64 instructions corresponding to BPF_CORE_TYPE_ID_TARGET and BPF_CORE_TYPE_ID_LOCAL CO-RE relos can get their imm value out of sync with the actual CO-RE relocation information, which the BPF linker does update properly during linking. We could teach the BPF linker to adjust such instructions, but it feels like too much for the linker to re-implement a good chunk of the bpf_core_patch_insn() logic just for this. This is a redundant safety check for TYPE_ID relocations anyway: the real validation is in matching CO-RE specs, so if that works fine, it's very unlikely that something is wrong with the instruction itself. Instead, teach libbpf (and the kernel) to ignore insn->imm for BPF_CORE_TYPE_ID_TARGET and BPF_CORE_TYPE_ID_LOCAL relos.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211213010706.100231-1-andrii@kernel.org
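A sketch of the relaxed check; the variable and field names (res, orig_val) follow the style used in relo_core.c but are assumptions for illustration:

    /* For TYPE_ID relocations the BPF linker may have renumbered type IDs
     * after the relocation record was emitted, so insn->imm can legitimately
     * disagree with the recorded original value. Skip the sanity check for
     * these two kinds instead of failing the relocation.
     */
    bool is_type_id_relo = relo->kind == BPF_CORE_TYPE_ID_LOCAL ||
                           relo->kind == BPF_CORE_TYPE_ID_TARGET;

    if (!is_type_id_relo && insn->imm != res->orig_val)
            return -EINVAL; /* unexpected original value, bail out */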
2021-12-03  libbpf: Reduce bpf_core_apply_relo_insn() stack usage.  (Alexei Starovoitov)
Reduce bpf_core_apply_relo_insn() stack usage and bump the BPF_CORE_SPEC_MAX_LEN limit back to 64.

Fixes: 29db4bea1d10 ("bpf: Prepare relo_core.c for kernel duty.")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211203182836.16646-1-alexei.starovoitov@gmail.com
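One way to get the large spec objects off the stack is to have the caller provide scratch storage; a sketch of that shape, with the specs_scratch parameter name and layout assumed rather than quoted from the patch:

    /* The three bpf_core_spec objects (each sized by BPF_CORE_SPEC_MAX_LEN)
     * no longer live in bpf_core_apply_relo_insn()'s own stack frame.
     */
    int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
                                 int insn_idx, const struct bpf_core_relo *relo,
                                 int relo_idx, const struct btf *local_btf,
                                 struct bpf_core_cand_list *cands,
                                 struct bpf_core_spec *specs_scratch)
    {
            struct bpf_core_spec *local_spec = &specs_scratch[0];
            struct bpf_core_spec *cand_spec = &specs_scratch[1];
            struct bpf_core_spec *targ_spec = &specs_scratch[2];
            ...
    }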
2021-12-02  bpf: Define enum bpf_core_relo_kind as uapi.  (Alexei Starovoitov)
enum bpf_core_relo_kind is generated by llvm and processed by libbpf, making it a de facto uapi. With CO-RE in the kernel, the bpf_core_relo_kind values become uapi de jure. Also rename them with a BPF_CORE_ prefix to distinguish them from conflicting names in bpf_core_read.h. The enums bpf_field_info_kind, bpf_type_id_kind, bpf_type_info_kind, and bpf_enum_value_kind pass different values from the BPF program into llvm and are left as-is.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201181040.23337-5-alexei.starovoitov@gmail.com
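For reference, the renamed kinds as they later appear in include/uapi/linux/bpf.h (abridged; the comments are short paraphrases, the header is authoritative):

    enum bpf_core_relo_kind {
            BPF_CORE_FIELD_BYTE_OFFSET = 0,   /* field byte offset */
            BPF_CORE_FIELD_BYTE_SIZE = 1,     /* field size in bytes */
            BPF_CORE_FIELD_EXISTS = 2,        /* field existence in target kernel */
            BPF_CORE_FIELD_SIGNED = 3,        /* field signedness */
            BPF_CORE_FIELD_LSHIFT_U64 = 4,    /* bitfield-specific left bitshift */
            BPF_CORE_FIELD_RSHIFT_U64 = 5,    /* bitfield-specific right bitshift */
            BPF_CORE_TYPE_ID_LOCAL = 6,       /* type ID in local BPF object */
            BPF_CORE_TYPE_ID_TARGET = 7,      /* type ID in target kernel */
            BPF_CORE_TYPE_EXISTS = 8,         /* type existence in target kernel */
            BPF_CORE_TYPE_SIZE = 9,           /* type size in bytes */
            BPF_CORE_ENUMVAL_EXISTS = 10,     /* enum value existence in target kernel */
            BPF_CORE_ENUMVAL_VALUE = 11,      /* enum value integer value */
    };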
2021-12-02  bpf: Prepare relo_core.c for kernel duty.  (Alexei Starovoitov)
Make relo_core.c compilable both for the kernel and for user-space libbpf.

Note that the patch reduces BPF_CORE_SPEC_MAX_LEN from 64 to 32. This is the maximum number of nested structs and arrays. For example:

    struct sample {
            int a;
            struct {
                    int b[10];
            };
    };

    struct sample *s = ...;
    int *y = &s->b[5];

This field access is encoded as "0:1:0:5" and the spec len is 4. A follow-up patch might bump the limit back to 64.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201181040.23337-4-alexei.starovoitov@gmail.com
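A reader aid for the access string above; the per-component breakdown is implied by the example, and the comments are added here:

    struct sample *s = ...;
    int *y = &s->b[5];
    /*
     * CO-RE access spec "0:1:0:5" (spec len 4):
     *   0 - dereference of s itself, i.e. &s[0]
     *   1 - member #1 of struct sample: the anonymous struct
     *   0 - member #0 of that anonymous struct: the array b
     *   5 - index 5 within b
     */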
2021-12-02  libbpf: Replace btf__type_by_id() with btf_type_by_id().  (Alexei Starovoitov)
To prepare relo_core.c to be compiled both in the kernel and in user space, replace btf__type_by_id() with btf_type_by_id(). In libbpf the two functions behave differently: bpf_core_apply_relo_insn() wants the behavior of the public btf__type_by_id() rather than the internal btf_type_by_id(), but the type_id range check is already done in bpf_core_apply_relo(), so the replacement is safe everywhere. The kernel's btf_type_by_id() does the check anyway, so it doesn't hurt.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211201181040.23337-2-alexei.starovoitov@gmail.com
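Roughly, the difference in libbpf looks like this (a simplified sketch, not the exact upstream function bodies):

    /* Public API: validates the ID before the lookup. */
    const struct btf_type *btf__type_by_id(const struct btf *btf, __u32 type_id)
    {
            if (type_id >= btf->start_id + btf->nr_types)
                    return errno = EINVAL, NULL;
            return btf_type_by_id(btf, type_id);
    }

    /* Internal helper: assumes the caller already checked the range. */
    struct btf_type *btf_type_by_id(const struct btf *btf, __u32 type_id)
    {
            ...
    }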
2021-10-25  libbpf: Use __BYTE_ORDER__  (Ilya Leoshkevich)
Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined __BYTE_ORDER for consistency.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-3-iii@linux.ibm.com
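A minimal before/after illustration of the switch (a generic example, not a quote of the patch):

    /* Before: libc-defined, needs <endian.h> */
    #if __BYTE_ORDER == __LITTLE_ENDIAN
    ...
    #endif

    /* After: compiler-defined, provided by GCC and Clang without any header */
    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    ...
    #endif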
2021-07-26  libbpf: Split CO-RE logic into relo_core.c.  (Alexei Starovoitov)
Move the CO-RE logic into a separate file. The internal interface between libbpf and CO-RE is the bpf_core_apply_relo_insn() function and a few structs defined in relo_core.h.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210721000822.40958-5-alexei.starovoitov@gmail.com
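One of those structs is the per-relocation record; a sketch of its shape (field comments paraphrased here, see relo_core.h for the authoritative definition):

    struct bpf_core_relo {
            __u32 insn_off;               /* byte offset of the instruction to patch */
            __u32 type_id;                /* BTF type ID of the local "root" type */
            __u32 access_str_off;         /* offset of the access string, e.g. "0:1:0:5" */
            enum bpf_core_relo_kind kind; /* what to relocate: field info, type info, enum value */
    };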