CVE-2023-53151
BaseFortify

Publication date: 2025-09-15

Last updated on: 2025-11-24

Assigner: kernel.org

Description
In the Linux kernel, the following vulnerability has been resolved:

md/raid10: prevent soft lockup while flush writes

Currently, there is no limit for raid1/raid10 plugged bio. While flushing writes, raid1 has cond_resched() while raid10 doesn't, and too many writes can cause a soft lockup. The following soft lockup can be triggered easily with a writeback test for raid10 with ramdisks:

watchdog: BUG: soft lockup - CPU#10 stuck for 27s! [md0_raid10:1293]
Call Trace:
 <TASK>
 call_rcu+0x16/0x20
 put_object+0x41/0x80
 __delete_object+0x50/0x90
 delete_object_full+0x2b/0x40
 kmemleak_free+0x46/0xa0
 slab_free_freelist_hook.constprop.0+0xed/0x1a0
 kmem_cache_free+0xfd/0x300
 mempool_free_slab+0x1f/0x30
 mempool_free+0x3a/0x100
 bio_free+0x59/0x80
 bio_put+0xcf/0x2c0
 free_r10bio+0xbf/0xf0
 raid_end_bio_io+0x78/0xb0
 one_write_done+0x8a/0xa0
 raid10_end_write_request+0x1b4/0x430
 bio_endio+0x175/0x320
 brd_submit_bio+0x3b9/0x9b7 [brd]
 __submit_bio+0x69/0xe0
 submit_bio_noacct_nocheck+0x1e6/0x5a0
 submit_bio_noacct+0x38c/0x7e0
 flush_pending_writes+0xf0/0x240
 raid10d+0xac/0x1ed0

Fix the problem by adding cond_resched() to raid10, as raid1 does. Note that unlimited plugged bio still needs to be optimized: for example, in the case of lots of dirty-page writeback, it will take lots of memory, and io will spend a long time in the plug, hence io latency is bad.
Meta Information
Published
2025-09-15
Last Modified
2025-11-24
Generated
2026-05-06
AI Q&A
2025-09-15
EPSS Evaluated
2026-05-05
NVD
Affected Vendors & Products
Showing 7 associated CPEs
Vendor Product Version / Range
linux linux_kernel From 5.15.160 (inc) to 5.16 (inc)
Helpful Resources
Exploitability
CWE ID Description
CWE-667 The product does not properly acquire or release a lock on a resource, leading to unexpected resource state changes and behaviors.
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability exists in the Linux kernel's md/raid10 subsystem where there is no limit on raid1/raid10 plugged bio operations. Specifically, during flush writes, raid1 calls cond_resched() to yield the CPU, but raid10 does not. This can cause too many writes to accumulate and lead to a soft lockup, where the CPU gets stuck for an extended period. The issue was fixed by adding cond_resched() to raid10 similar to raid1 to prevent the CPU from being stuck.


How can this vulnerability impact me?

This vulnerability can cause a soft lockup in the system's CPU during heavy write operations on raid10 devices. This means the CPU can become unresponsive or stuck for a long time, leading to degraded system performance, increased IO latency, and potentially impacting system stability during intensive disk write workloads.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability can be detected by observing soft lockup warnings related to raid10 devices in the system logs. For example, messages like 'watchdog: BUG: soft lockup - CPU#X stuck for Ys! [md0_raid10:PID]' indicate the issue. You can monitor the kernel logs with commands such as 'dmesg | grep -i "soft lockup"' or 'journalctl -k | grep -i "soft lockup"' (the two-word pattern must be quoted so grep treats it as one string rather than reading 'lockup' as a filename). Additionally, running writeback tests on raid10 devices backed by ramdisks may reproduce the soft lockup condition.
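The log check above can be sketched as a small script. The sample line below is copied from the advisory's watchdog output; on a live system you would feed `dmesg` or `journalctl -k` into the same grep instead of the printf:

```shell
# Sample watchdog line copied from this advisory. On a real system,
# replace the printf with `dmesg` or `journalctl -k`.
sample='watchdog: BUG: soft lockup - CPU#10 stuck for 27s! [md0_raid10:1293]'

# Quote the pattern: an unquoted `grep -i soft lockup` would treat
# "lockup" as a filename. The raid10 part narrows hits to md raid10
# worker threads such as md0_raid10.
printf '%s\n' "$sample" | grep -iE 'soft lockup.*raid10'
```

The grep exits nonzero when nothing matches, so the same line works as a probe in monitoring scripts.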


What immediate steps should I take to mitigate this vulnerability?

Immediate mitigation involves updating the Linux kernel to a version where the fix has been applied, which adds cond_resched() calls to raid10 to prevent soft lockups during flush writes. Until the update is applied, monitoring for soft lockup warnings and avoiding heavy writeback loads on raid10 devices can reduce the risk. Note that the underlying issue is related to unlimited plugged bio and may require kernel optimization.
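As a quick check that an updated kernel is actually running, the installed version can be compared against the fixed release named in your distribution's advisory. The `FIXED` value below is a hypothetical placeholder, not a version taken from this page:

```shell
# FIXED is a hypothetical placeholder -- substitute the patched kernel
# version listed in your distribution's advisory for CVE-2023-53151.
FIXED="6.4.0"
# Strip distro suffixes such as "-generic" to get the bare version.
RUNNING="$(uname -r | cut -d- -f1)"

# sort -V orders version strings numerically; if FIXED sorts first
# (or equal), the running kernel is at least the fixed version.
if [ "$(printf '%s\n%s\n' "$FIXED" "$RUNNING" | sort -V | head -n1)" = "$FIXED" ]; then
    echo "kernel $RUNNING is at or above $FIXED"
else
    echo "kernel $RUNNING is below $FIXED - update recommended"
fi
```

Note this only compares the base version string; backported fixes in distribution kernels may not be reflected in `uname -r`, so the distribution's own advisory remains authoritative.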

