CVE-2026-23086
Memory Exhaustion via Unbounded TX Credit in Linux virtio-vsock

Publication date: 2026-02-04

Last updated on: 2026-03-17

Assigner: kernel.org

Description
In the Linux kernel, the following vulnerability has been resolved:

vsock/virtio: cap TX credit to local buffer size

The virtio transport derives its TX credit directly from peer_buf_alloc, which is set from the remote endpoint's SO_VM_SOCKETS_BUFFER_SIZE value. On the host side this means that the amount of data we are willing to queue for a connection is scaled by a guest-chosen buffer size, rather than by the host's own vsock configuration. A malicious guest can advertise a large buffer and read slowly, causing the host to allocate a correspondingly large amount of sk_buff memory. The same thing can happen in the guest with a malicious host, since the virtio transports share the same code base.

Introduce a small helper, virtio_transport_tx_buf_size(), that returns min(peer_buf_alloc, buf_alloc), and use it wherever we consume peer_buf_alloc. This ensures the effective TX window is bounded by both the peer's advertised buffer and our own buf_alloc (already clamped to buffer_max_size via SO_VM_SOCKETS_BUFFER_MAX_SIZE), so a remote peer cannot force the other side to queue more data than its own vsock settings allow.

On an unpatched Ubuntu 22.04 host (~64 GiB RAM), running a PoC with 32 guest vsock connections advertising 2 GiB each and reading slowly drove Slab/SUnreclaim from ~0.5 GiB to ~57 GiB; the system only recovered after the QEMU process was killed. If QEMU's memory is limited with cgroups, however, the maximum memory consumed is bounded by the cgroup limit.

With this patch applied:

Before:
  MemFree:    ~61.6 GiB
  Slab:       ~142 MiB
  SUnreclaim: ~117 MiB

After 32 high-credit connections:
  MemFree:    ~61.5 GiB
  Slab:       ~178 MiB
  SUnreclaim: ~152 MiB

Only ~35 MiB of increase in Slab/SUnreclaim, no host OOM, and the guest remains responsive.

Compatibility with non-virtio transports:

- VMCI uses the AF_VSOCK buffer knobs to size its queue pairs per socket based on the local vsk->buffer_* values; the remote side cannot enlarge those queues beyond what the local endpoint configured.
- Hyper-V's vsock transport uses fixed-size VMBus ring buffers and an MTU bound; there is no peer-controlled credit field comparable to peer_buf_alloc, and the remote endpoint cannot drive in-flight kernel memory above those ring sizes.
- The loopback path reuses virtio_transport_common.c, so it naturally follows the same semantics as the virtio transport.

This change is limited to virtio_transport_common.c and thus affects virtio-vsock, vhost-vsock, and loopback, bringing them in line with the "remote window intersected with local policy" behaviour that VMCI and Hyper-V already effectively have.

[Stefano: small adjustments after changing the previous patch]
[Stefano: tweak the commit message]
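The commit message above describes the fix as min(peer_buf_alloc, buf_alloc). The following standalone sketch models that clamp in userspace; the struct and field names (buf_alloc, peer_buf_alloc, tx_cnt, peer_fwd_cnt) mirror the bookkeeping described for virtio_transport_common.c, but this is illustrative code, not the upstream patch.

#include <stdint.h>
#include <stdio.h>

/* Standalone model of the credit clamp described above. Names mirror the
 * kernel's virtio-vsock bookkeeping; this is a sketch, not the real diff. */
struct vsock_credit_state {
	uint32_t buf_alloc;      /* local buffer size, clamped by
				  * SO_VM_SOCKETS_BUFFER_MAX_SIZE */
	uint32_t peer_buf_alloc; /* buffer size advertised by the peer */
	uint32_t tx_cnt;         /* bytes sent so far */
	uint32_t peer_fwd_cnt;   /* bytes the peer reports having consumed */
};

static uint32_t virtio_transport_tx_buf_size(const struct vsock_credit_state *s)
{
	/* The fix: the effective TX window is bounded by BOTH the peer's
	 * advertised buffer and our own local allocation. */
	return s->peer_buf_alloc < s->buf_alloc ? s->peer_buf_alloc : s->buf_alloc;
}

static uint32_t tx_credit(const struct vsock_credit_state *s)
{
	/* Credit left = clamped window minus bytes in flight. */
	return virtio_transport_tx_buf_size(s) - (s->tx_cnt - s->peer_fwd_cnt);
}

int main(void)
{
	/* A peer advertising 2 GiB no longer inflates our queue: with a
	 * local 256 KiB buf_alloc the usable window stays at 256 KiB. */
	struct vsock_credit_state s = {
		.buf_alloc      = 256 * 1024,
		.peer_buf_alloc = 2147483648u, /* 2 GiB, as in the PoC */
		.tx_cnt         = 64 * 1024,
		.peer_fwd_cnt   = 0,
	};
	printf("effective window: %u bytes, credit left: %u bytes\n",
	       (unsigned)virtio_transport_tx_buf_size(&s),
	       (unsigned)tx_credit(&s));
	return 0;
}

Running the model shows the effective window staying at the local 256 KiB bound even though the peer advertises 2 GiB, which is the behaviour the patch enforces.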
Affected Vendors & Products
Vendor  Product       Version / Range
linux   linux_kernel  From 4.8 (inc) to 6.1.162 (exc)
linux   linux_kernel  From 6.2 (inc) to 6.6.122 (exc)
linux   linux_kernel  From 6.7 (inc) to 6.12.68 (exc)
linux   linux_kernel  From 6.13 (inc) to 6.18.8 (exc)
linux   linux_kernel  6.19
CWE ID
CWE-UNKNOWN (no CWE has been assigned to this vulnerability)
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability exists in the Linux kernel's virtio transport for vsock, where the transmit (TX) credit is derived directly from a buffer size value controlled by the remote endpoint (peer_buf_alloc). This means a malicious guest can advertise a very large buffer size and read data slowly, causing the host to allocate an excessive amount of kernel memory (sk_buff), potentially leading to resource exhaustion.

The issue arises because the host scales the amount of data it queues for a connection based on the guest's advertised buffer size rather than its own configuration, allowing the guest to force the host to use more memory than intended.

The fix introduces a helper function that limits the TX buffer size to the minimum of the peer's advertised buffer and the local buffer allocation, ensuring that neither side can force the other to queue more data than allowed by its own settings.
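For context on where peer_buf_alloc comes from: it is the per-socket buffer size that the remote endpoint sets through the AF_VSOCK socket options. A minimal sketch of that API from a guest application's point of view follows; the port number is a placeholder, and this illustrates only the socket option that feeds peer_buf_alloc, not a working proof of concept.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* SO_VM_SOCKETS_BUFFER_SIZE takes a 64-bit value; whatever the
	 * guest sets here is what the host later sees as peer_buf_alloc.
	 * The value is clamped to the socket's BUFFER_MAX_SIZE, so raise
	 * the max first. */
	unsigned long long want = 2ULL << 30; /* 2 GiB, as in the PoC */
	if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE,
		       &want, sizeof(want)) < 0)
		perror("setsockopt(SO_VM_SOCKETS_BUFFER_MAX_SIZE)");
	if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
		       &want, sizeof(want)) < 0)
		perror("setsockopt(SO_VM_SOCKETS_BUFFER_SIZE)");

	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid    = VMADDR_CID_HOST, /* hypervisor-side endpoint */
		.svm_port   = 1234,            /* placeholder port */
	};
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		perror("connect");

	/* On an unpatched host, a peer that now reads slowly (or not at
	 * all) lets host-side sk_buff memory grow toward the advertised
	 * size. */
	close(fd);
	return 0;
}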


How can this vulnerability impact me?

This vulnerability can lead to excessive memory consumption on the host system when a malicious guest advertises large buffer sizes and reads data slowly. This can cause the host's kernel memory usage to spike dramatically, potentially leading to system instability or the need to kill processes to recover memory.

In practical terms, an unpatched system with sufficient RAM could see slab memory usage increase from hundreds of megabytes to tens of gigabytes, which may degrade performance or cause out-of-memory conditions.

However, with the patch applied, memory usage remains stable even under attack conditions, preventing host out-of-memory scenarios and keeping the guest responsive.
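The slab figures quoted above are visible in /proc/meminfo. A small self-contained program for spot-checking the relevant counters follows; Slab and SUnreclaim are standard /proc/meminfo fields, and this snippet only observes them, it does not detect the vulnerability itself.

#include <stdio.h>
#include <string.h>

/* Print the Slab and SUnreclaim counters from /proc/meminfo; a sudden,
 * sustained jump in these under vsock load is the symptom described
 * above. */
int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];

	if (!f) {
		perror("fopen /proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "Slab:", 5) ||
		    !strncmp(line, "SUnreclaim:", 11))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}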


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

I don't know


How can this vulnerability be detected on my network or system? Can you suggest some commands?

I don't know


What immediate steps should I take to mitigate this vulnerability?

This vulnerability has been resolved by a patch that limits the TX credit to the minimum of the peer's advertised buffer and the local buffer allocation, preventing a remote peer from forcing excessive memory allocation.

To mitigate this vulnerability immediately, ensure your Linux kernel is updated with the patch that introduces the virtio_transport_tx_buf_size() helper function, which bounds the effective TX window.

If you are running virtual machines using virtio-vsock or vhost-vsock, update your host system to a kernel version that includes this fix.

Additionally, consider limiting QEMU memory usage with cgroups to reduce the impact of potential exploitation.
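On a patched kernel, the effective TX window is also bounded by the local endpoint's own buffer settings, so a host-side vsock service can make its policy explicit with SO_VM_SOCKETS_BUFFER_MAX_SIZE. The sketch below shows that option on a listening socket as defense in depth, not a substitute for updating; the port is a placeholder and error handling is trimmed.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Pin this endpoint's buffer policy. With the fix applied, the
	 * effective TX credit can never exceed this local bound, whatever
	 * the peer advertises. */
	unsigned long long max_buf = 256ULL * 1024; /* 256 KiB */
	if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE,
		       &max_buf, sizeof(max_buf)) < 0)
		perror("setsockopt(SO_VM_SOCKETS_BUFFER_MAX_SIZE)");

	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid    = VMADDR_CID_ANY,
		.svm_port   = 1234, /* placeholder port */
	};
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
		listen(fd, 8);

	close(fd);
	return 0;
}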

