CVE-2025-68211
BaseFortify

Publication date: 2025-12-16

Last updated on: 2026-02-26

Assigner: kernel.org

Description

In the Linux kernel, the following vulnerability has been resolved:

ksm: use range-walk function to jump over holes in scan_get_next_rmap_item

Currently, scan_get_next_rmap_item() walks every page address in a VMA to locate mergeable pages. This becomes highly inefficient when scanning large virtual memory areas that contain mostly unmapped regions, causing ksmd to use a large amount of CPU without deduplicating many pages.

This patch replaces the per-address lookup with a range walk using walk_page_range(). The range walker allows KSM to skip over entire unmapped holes in a VMA, avoiding unnecessary lookups. This problem was previously discussed in [1].

Consider the following test program, which creates a 32 TiB mapping in the virtual address space but populates only a single page:

    #include <unistd.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* 32 TiB */
    const size_t size = 32ul * 1024 * 1024 * 1024 * 1024;

    int main() {
        char *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_NORESERVE | MAP_PRIVATE | MAP_ANON, -1, 0);
        if (area == MAP_FAILED) {
            perror("mmap() failed\n");
            return -1;
        }

        /* Populate a single page such that we get an anon_vma. */
        *area = 0;

        /* Enable KSM. */
        madvise(area, size, MADV_MERGEABLE);
        pause();
        return 0;
    }

    $ ./ksm-sparse &
    $ echo 1 > /sys/kernel/mm/ksm/run

Without this patch, ksmd uses 100% of the CPU for a long time (more than 1 hour on my test machine) scanning the entire 32 TiB virtual address space, which contains only one mapped page. This leaves ksmd essentially stuck, unable to deduplicate anything of value. With this patch, ksmd walks only the one mapped page and skips the rest of the 32 TiB virtual address space, making the scan fast while using little CPU.
Meta Information
Published: 2025-12-16
Last Modified: 2026-02-26
Generated: 2026-05-07
AI Q&A: 2025-12-16
EPSS Evaluated: 2026-05-05
Source: NVD
Affected Vendors & Products

Vendor   Product        Version / Range
linux    linux_kernel   From 2.6.32 (inc) to 5.10.249 (exc)
linux    linux_kernel   From 5.11 (inc) to 5.15.199 (exc)
linux    linux_kernel   From 5.16 (inc) to 6.1.161 (exc)
linux    linux_kernel   From 6.2 (inc) to 6.6.121 (exc)
linux    linux_kernel   From 6.7 (inc) to 6.12.59 (exc)
linux    linux_kernel   From 6.13 (inc) to 6.17.9 (exc)
linux    linux_kernel   6.18
Exploitability

CWE: CWE-UNKNOWN
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability involves the Linux kernel's Kernel Samepage Merging (KSM) feature, where the function scan_get_next_rmap_item() inefficiently scans every page address in a large virtual memory area (VMA), including unmapped regions. This causes the KSM daemon (ksmd) to use excessive CPU resources when scanning large VMAs with mostly unmapped pages, leading to poor performance and ineffective memory deduplication. The patch fixes this by using a range-walk function (walk_page_range()) to skip over unmapped holes, significantly improving efficiency.


How can this vulnerability impact me?

If unpatched, this vulnerability can cause the KSM daemon (ksmd) to consume 100% CPU for extended periods when scanning large virtual memory areas with sparse mappings. This results in high CPU usage without effective memory deduplication, potentially degrading system performance and responsiveness.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability can be detected by monitoring the CPU usage of the ksmd process. If ksmd is using 100% CPU for an extended period while scanning large virtual memory areas with mostly unmapped regions, it may indicate the presence of this issue. You can check ksmd CPU usage with commands like 'top' or 'htop'. Additionally, you can verify if KSM is running by checking the value in /sys/kernel/mm/ksm/run using 'cat /sys/kernel/mm/ksm/run'.


What immediate steps should I take to mitigate this vulnerability?

To mitigate this vulnerability, update the Linux kernel to a version that includes the patch replacing the per-address lookup with a range walk using walk_page_range() in scan_get_next_rmap_item(). This patch allows KSM to skip unmapped holes efficiently, reducing CPU usage. As a temporary measure, you can disable KSM by writing '0' to /sys/kernel/mm/ksm/run using 'echo 0 > /sys/kernel/mm/ksm/run' to prevent high CPU usage until the kernel is updated.

