CVE-2026-43464
Status: Received - Intake
Buffer Fragment Counting Issue in Linux Kernel's mlx5e Driver

Publication date: 2026-05-08

Last updated on: 2026-05-08

Assigner: kernel.org

Description
In the Linux kernel, the following vulnerability has been resolved:

net/mlx5e: RX, Fix XDP multi-buf frag counting for legacy RQ

XDP multi-buf programs can modify the layout of the XDP buffer when the program calls bpf_xdp_pull_data() or bpf_xdp_adjust_tail(). The referenced commit in the fixes tag corrected the assumption in the mlx5 driver that the XDP buffer layout doesn't change during program execution. However, that fix introduced another issue: the dropped fragments still need to be counted on the driver side to avoid page fragment reference counting issues.

Such an issue can be observed with the test_xdp_native_adjst_tail_shrnk_data selftest when using a payload of 3600 bytes and shrinking by 256 bytes (an upcoming selftest patch): the last fragment gets released by the XDP code but doesn't get tracked by the driver. This results in a negative pp_ref_count during page release and the following splat:

  WARNING: include/net/page_pool/helpers.h:297 at mlx5e_page_release_fragmented.isra.0+0x4a/0x50 [mlx5_core]
  CPU#12: ip/3137
  Modules linked in: [...]
  CPU: 12 UID: 0 PID: 3137 Comm: ip Not tainted 6.19.0-rc3+ #12 NONE
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
  RIP: 0010:mlx5e_page_release_fragmented.isra.0+0x4a/0x50 [mlx5_core]
  [...]
  Call Trace:
   <TASK>
   mlx5e_dealloc_rx_wqe+0xcb/0x1a0 [mlx5_core]
   mlx5e_free_rx_descs+0x7f/0x110 [mlx5_core]
   mlx5e_close_rq+0x50/0x60 [mlx5_core]
   mlx5e_close_queues+0x36/0x2c0 [mlx5_core]
   mlx5e_close_channel+0x1c/0x50 [mlx5_core]
   mlx5e_close_channels+0x45/0x80 [mlx5_core]
   mlx5e_safe_switch_params+0x1a5/0x230 [mlx5_core]
   mlx5e_change_mtu+0xf3/0x2f0 [mlx5_core]
   netif_set_mtu_ext+0xf1/0x230
   do_setlink.isra.0+0x219/0x1180
   rtnl_newlink+0x79f/0xb60
   rtnetlink_rcv_msg+0x213/0x3a0
   netlink_rcv_skb+0x48/0xf0
   netlink_unicast+0x24a/0x350
   netlink_sendmsg+0x1ee/0x410
   __sock_sendmsg+0x38/0x60
   ____sys_sendmsg+0x232/0x280
   ___sys_sendmsg+0x78/0xb0
   __sys_sendmsg+0x5f/0xb0
   [...]
   do_syscall_64+0x57/0xc50

This patch fixes the issue by doing page frag counting on all the original XDP buffer fragments for all relevant XDP actions (XDP_TX, XDP_REDIRECT and XDP_PASS). This essentially reverts to the original counting that existed before the commit in the fixes tag. As frag_page still points to the original tail, the nr_frags parameter to xdp_update_skb_frags_info() needs to be calculated differently to reflect the new nr_frags.
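The counting approach described in the last paragraph can be sketched in simplified C. This is an illustrative sketch only, not the actual mlx5 patch: the struct, field, and helper names below are invented for the example, and only the two ideas they express come from the commit message (count every original fragment for XDP_TX, XDP_REDIRECT and XDP_PASS alike, and derive the new nr_frags rather than reading it from the stale tail).

  /* Hypothetical sketch; names are invented, only the logic follows the
   * commit message above. */

  struct sketch_frag_page {
          unsigned int frags;     /* driver-side fragment reference count */
  };

  /* Count a reference for every fragment of the ORIGINAL XDP buffer, even
   * ones the BPF program dropped via bpf_xdp_adjust_tail(), so the later
   * page release never drives pp_ref_count negative. */
  static void sketch_count_original_frags(struct sketch_frag_page **frag_pages,
                                          unsigned int orig_nr_frags)
  {
          unsigned int i;

          for (i = 0; i < orig_nr_frags; i++)
                  frag_pages[i]->frags++;
  }

  /* frag_page still points at the original tail, so the nr_frags value for
   * xdp_update_skb_frags_info() must be computed from what remains after
   * the program ran, not taken from the stale tail position. */
  static unsigned int sketch_new_nr_frags(unsigned int orig_nr_frags,
                                          unsigned int frags_dropped)
  {
          return orig_nr_frags - frags_dropped;
  }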
Meta Information
Published: 2026-05-08
Last Modified: 2026-05-08
Generated: 2026-05-09
AI Q&A: 2026-05-08
EPSS Evaluated: N/A
External references: NVD, EUVD
Affected Vendors & Products
1 associated CPE
Vendor   Product     Version / Range
mlx      mlx5_core   *
Exploitability
CWE ID: CWE-UNKNOWN
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability exists in the Linux kernel's mlx5 driver related to XDP (eXpress Data Path) multi-buffer programs. These programs can modify the layout of the XDP buffer during execution using functions like bpf_xdp_pull_data() or bpf_xdp_adjust_tail(). The mlx5 driver incorrectly assumed that the buffer layout would not change during execution.

A fix was introduced to correct this assumption, but it caused another problem: the driver failed to properly count dropped fragments of the buffer. This improper counting leads to negative reference counts during page release, which can cause kernel warnings and potential instability.

The issue manifests as a kernel warning with a stack trace (a 'splat') related to page fragment reference counting, triggered under specific conditions involving shrinking payload data in XDP multi-buffer programs.

The patch to fix this vulnerability restores proper fragment counting for all original XDP buffer fragments across relevant XDP actions (XDP_TX, XDP_REDIRECT, and XDP_PASS), ensuring stable and correct memory management in the driver.
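To make the trigger concrete, below is a minimal sketch of the kind of XDP multi-buffer program involved. The program name and the 256-byte shrink (borrowed from the selftest scenario) are illustrative choices; bpf_xdp_adjust_tail() and the "xdp.frags" section convention are the standard kernel/libbpf interfaces for multi-buffer XDP.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* SEC("xdp.frags") marks the program as multi-buffer capable, which is
   * what allows it to run on packets spanning several fragments. */
  SEC("xdp.frags")
  int xdp_shrink_tail(struct xdp_md *ctx)
  {
          /* Shrinking the tail by 256 bytes can release the buffer's last
           * fragment inside the XDP core; the driver must still account
           * for that fragment, which is what the fix restores. */
          if (bpf_xdp_adjust_tail(ctx, -256) < 0)
                  return XDP_ABORTED;

          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";

Attaching such a program to an affected mlx5 interface and sending packets large enough to span multiple fragments exercises the code path the patch corrects.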


How can this vulnerability impact me?

This vulnerability can cause kernel instability or crashes due to incorrect reference counting of memory fragments in the mlx5 driver when using XDP multi-buffer programs.

Such instability may lead to unexpected system warnings, degraded network performance, or potential denial of service conditions on systems using the affected mlx5 driver with XDP features.

Systems relying on high-performance networking with mlx5 hardware and XDP multi-buffer programs are particularly at risk of encountering these issues.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability can be detected by observing kernel warnings related to mlx5_core, specifically messages indicating negative page fragment reference counts during page release. For example, kernel logs may show warnings like:

  • WARNING: include/net/page_pool/helpers.h:297 at mlx5e_page_release_fragmented.isra.0+0x4a/0x50 [mlx5_core], CPU#12: ip/3137

To detect this on your system, you can monitor kernel logs using commands such as:

  • dmesg | grep mlx5_core
  • journalctl -k | grep mlx5_core

Additionally, running the test_xdp_native_adjst_tail_shrnk_data selftest with a payload of 3600 bytes and shrinking by 256 bytes can reproduce the issue on a vulnerable system.
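If you want to watch for the splat continuously rather than grepping after the fact, a small standalone watcher can be sketched in C. This is an illustrative example, not official tooling; it tails /dev/kmsg (normally requires root) and matches on the function named in the warning above.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          FILE *kmsg = fopen("/dev/kmsg", "r");
          char line[8192];

          if (!kmsg) {
                  perror("fopen /dev/kmsg");
                  return 1;
          }

          /* Each read from /dev/kmsg returns one log record; fgets blocks
           * at the end of the log waiting for new messages. */
          while (fgets(line, sizeof(line), kmsg)) {
                  if (strstr(line, "mlx5e_page_release_fragmented"))
                          printf("possible CVE-2026-43464 splat: %s", line);
          }

          fclose(kmsg);
          return 0;
  }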


What immediate steps should I take to mitigate this vulnerability?

The vulnerability has been fixed by a patch that correctly counts page fragments on all original XDP buffer fragments for relevant XDP actions (XDP_TX, XDP_REDIRECT, and XDP_PASS).

Immediate mitigation steps include:

  • Update the Linux kernel to a version that includes the fix for this vulnerability.
  • Avoid running XDP multi-buf programs that call bpf_xdp_pull_data() or bpf_xdp_adjust_tail() on affected mlx5 drivers until the patch is applied.
  • Monitor kernel logs for warnings related to mlx5_core and page fragment reference counts to detect potential exploitation.
