author | Jakub Kicinski <kuba@kernel.org> | 2024-03-07 14:11:22 -0800
committer | David S. Miller <davem@davemloft.net> | 2024-03-11 10:22:06 +0000
commit | 900b2801bf250affe410193a0d27a2ba9f2db4e5 (patch)
tree | df32412674788b5595094763d5d633a30e67cc23 /tools/net
parent | 08842c43d0165b0ed78907fd8cc92ce17d857913 (diff)
ynl: samples: fix recycling rate calculation
Running the page-pool sample on production machines under moderate
networking load shows a recycling rate higher than 100%:
$ page-pool
eth0[2] page pools: 14 (zombies: 0)
refs: 89088 bytes: 364904448 (refs: 0 bytes: 0)
recycling: 100.3% (alloc: 1392:2290247724 recycle: 469289484:1828235386)
Note that outstanding refs (89088) == slow allocs * cache size (1392 * 64),
which means this machine is recycling page pool pages perfectly: not
a single page has been released.
The extra 0.3% is because the sample ignores allocations from the ptr_ring.
Treat those the same as alloc_fast; the ring vs cache alloc split is
already captured accurately enough by the recycling stats.
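
For context, the rate the sample prints is effectively recycled pages
divided by allocated pages. Below is a minimal sketch of that calculation
(counter names mirror the page pool netlink stats; the exact aggregation
in page-pool.c may differ). It shows why leaving the ptr_ring refills out
of the denominator pushes the result past 100%:

/* Sketch only: recycling rate as a percentage of allocations. */
static double recycling_rate(unsigned long long alloc_fast,
			     unsigned long long alloc_refill,
			     unsigned long long alloc_slow,
			     unsigned long long recycle_ring,
			     unsigned long long recycle_cache)
{
	unsigned long long recycled = recycle_ring + recycle_cache;
	/* Before the fix the denominator omitted alloc_refill, so pages
	 * handed out from the ptr_ring were counted as recycled but never
	 * as allocated.
	 */
	unsigned long long allocated = alloc_fast + alloc_refill + alloc_slow;

	return 100.0 * recycled / allocated;
}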
With the fix:
$ page-pool
eth0[2] page pools: 14 (zombies: 0)
refs: 89088 bytes: 364904448 (refs: 0 bytes: 0)
recycling: 100.0% (alloc: 1392:2331141604 recycle: 473625579:1857460661)
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'tools/net')
-rw-r--r-- | tools/net/ynl/samples/page-pool.c | 2
1 files changed, 2 insertions, 0 deletions
diff --git a/tools/net/ynl/samples/page-pool.c b/tools/net/ynl/samples/page-pool.c
index 098b5190d0e5..332f281ee5cb 100644
--- a/tools/net/ynl/samples/page-pool.c
+++ b/tools/net/ynl/samples/page-pool.c
@@ -95,6 +95,8 @@ int main(int argc, char **argv)
 	if (pp->_present.alloc_fast)
 		s->alloc_fast += pp->alloc_fast;
+	if (pp->_present.alloc_refill)
+		s->alloc_fast += pp->alloc_refill;
 	if (pp->_present.alloc_slow)
 		s->alloc_slow += pp->alloc_slow;
 	if (pp->_present.recycle_ring)
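
As a rough sanity check on the formula sketched above (still an assumption
about how the sample aggregates the counters, reading the printed pairs as
"alloc: slow:fast" and "recycle: ring:cache"), plugging in the numbers from
the fixed run reproduces the 100.0% figure, while the pre-fix numbers give
the original 100.3%:

#include <stdio.h>

int main(void)
{
	/* Counters from the fixed run above. */
	unsigned long long alloc_slow = 1392ULL;
	unsigned long long alloc_fast = 2331141604ULL;	/* includes refill after the fix */
	unsigned long long recycle_ring = 473625579ULL;
	unsigned long long recycle_cache = 1857460661ULL;

	printf("recycling: %.1f%%\n",
	       100.0 * (recycle_ring + recycle_cache) /
	       (alloc_fast + alloc_slow));	/* prints 100.0% */
	return 0;
}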