author    | Alexander Lobakin <aleksander.lobakin@intel.com> | 2024-04-18 13:36:13 +0200
committer | Tony Nguyen <anthony.l.nguyen@intel.com> | 2024-04-24 11:06:25 -0700
commit    | e6c91556b97f855436fa45f75e69165d671012a7 (patch)
tree      | a8699c3960a723cd6b189f50e685ca68524c7428 /drivers/net/ethernet/intel/ice/ice_main.c
parent    | ce230f4f8981e2a7f06b71c22cc742cfe91a525d (diff)
libeth: add Rx buffer management
Add a couple of intuitive helpers that hide the Rx buffer implementation
details in the library instead of duplicating them across drivers. The
settings are loosely optimized for 100G+ NICs, but nothing here is
really HW-specific.
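As a rough sketch of the intended driver-facing flow (the helper names and
fields below are assumptions for illustration, not the exact libeth API):

    /* Illustrative only: names and fields are assumed, not verbatim libeth. */
    struct libeth_buf_queue fq = {
            .count   = rxq->desc_count,  /* number of Rx descriptors */
            .buf_len = rxq->max_frame,   /* desired HW buffer length */
            .nid     = NUMA_NO_NODE,
    };
    dma_addr_t dma;
    int err;

    /* The library picks split-page vs. full-page and sets up the page_pool. */
    err = libeth_rx_fq_create(&fq, &q_vector->napi);
    if (err)
            return err;

    /* Per-descriptor refill: offset/truesize handling stays in the library. */
    dma = libeth_rx_alloc(&fq, idx);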
Use the new page_pool_dev_alloc() to dynamically switch between
split-page and full-page modes depending on MTU, page size, required
headroom, etc. For example, on x86_64 with the default driver settings
each page is shared between 2 buffers. Turning on XDP (not in this
series) increases the headroom requirement and pushes the truesize past
the 2048-byte boundary, so each buffer then gets a full page.
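For illustration, the mode switch falls out of the page_pool fragment API
rather than any driver logic; a minimal refill sketch using
page_pool_dev_alloc(), with the surrounding variables assumed:

    unsigned int offset;
    unsigned int truesize = fq->truesize; /* precomputed from MTU, headroom, page size */
    struct page *page;
    dma_addr_t dma;

    /* On a 4K page, truesize <= 2048 lets two buffers share one page;
     * a larger truesize (e.g. extra XDP headroom) returns a full page.
     */
    page = page_pool_dev_alloc(rxq->pp, &offset, &truesize);
    if (unlikely(!page))
            return -ENOMEM;

    dma = page_pool_get_dma_addr(page) + offset;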
The "ceiling" limit is %PAGE_SIZE, as only order-0 pages are used to
avoid compound overhead. For the above architecture, this means maximum
linear frame size of 3712 w/o XDP.
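For reference, the 3712 figure is just the order-0 page minus the reserved
areas, assuming the usual x86_64 values (64 bytes of standard skb headroom,
320 bytes for the aligned skb_shared_info):

    4096 (PAGE_SIZE)
    -  64 (NET_SKB_PAD headroom, no XDP)
    - 320 (SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
    = 3712 bytes of linear frame space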
Note that &libeth_buf_queue is not a complete queue/ring structure for
now, but rather a shim; eventually the libeth-enabled drivers will move
to it, with iavf being the first one.
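A sketch of roughly what such a shim carries (field set assumed for
illustration): only what the allocation helpers need, while the descriptor
ring itself stays in the driver.

    /* Illustrative layout only, not the actual libeth definition. */
    struct libeth_buf_queue {
            struct page_pool *pp;        /* backing page_pool */
            u32              truesize;   /* per-buffer truesize */
            u32              count;      /* number of buffers/descriptors */
            u32              buf_len;    /* HW-visible buffer length */
            int              nid;        /* preferred NUMA node */
    };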
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Diffstat (limited to 'drivers/net/ethernet/intel/ice/ice_main.c')
0 files changed, 0 insertions, 0 deletions