author	Harshitha Ramamurthy <hramamurthy@google.com>	2024-10-23 15:11:41 -0700
committer	Jakub Kicinski <kuba@kernel.org>	2024-10-30 17:59:16 -0700
commit	4ddf7ccfdf70364010005b0b695b1a0d92677425 (patch)
tree	eb1de096cee5f866e02ef034086628d125681b4f /drivers/net/ethernet/google
parent	e110225ec120bb4327b442601daffe82ca514bbf (diff)
gve: change to use page_pool_put_full_page when recycling pages
The driver currently uses page_pool_put_page() to recycle page pool pages. Since gve uses split pages, if the fragment being recycled is not the last fragment in the page, there is no dma sync operation. When the last fragment is recycled, the dma sync is performed by the page pool infrastructure according to the dma_sync_size value passed in, which is currently set to the size of the fragment. But the correct thing to do is to dma sync the entire page when the last fragment is recycled. Hence, change to using page_pool_put_full_page().

Link: https://lore.kernel.org/netdev/89d7ce83-cc1d-4791-87b5-6f7af29a031d@huawei.com/
Suggested-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Fixes: ebdfae0d377b ("gve: adopt page pool for DQ RDA mode")
Link: https://patch.msgid.link/20241023221141.3008011-1-pkaligineedi@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'drivers/net/ethernet/google')
-rw-r--r--	drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c	3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c b/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c
index 05bf1f80a79c..403f0f335ba6 100644
--- a/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c
@@ -210,8 +210,7 @@ void gve_free_to_page_pool(struct gve_rx_ring *rx,
 	if (!page)
 		return;
 
-	page_pool_put_page(page->pp, page, buf_state->page_info.buf_size,
-			   allow_direct);
+	page_pool_put_full_page(page->pp, page, allow_direct);
 	buf_state->page_info.page = NULL;
 }