CVE-2025-40362
BaseFortify
Publication date: 2025-12-16
Last updated on: 2025-12-18
Assigner: kernel.org
Description
CVSS Scores
EPSS Scores
| Metric | Value |
|---|---|
| Probability | |
| Percentile | |
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| ceph | ceph | * |
Exploitability
| CWE ID | Description |
|---|---|
| CWE-UNKNOWN | |
AI Powered Q&A
Can you explain this vulnerability to me?
This vulnerability in the Linux kernel's Ceph file system involves improper validation of multi-file system (multifs) metadata server (mds) authorization capabilities (auth caps). Specifically, the mds auth caps check fails to validate the file system name (fsname) along with the associated caps. As a result, the authorization caps of one file system can be incorrectly applied to another file system within a multifs Ceph cluster. This means a user authorized with limited permissions on one file system could gain unauthorized elevated permissions (like read/write) on another file system.
How can this vulnerability impact me?
This vulnerability can lead to unauthorized access and privilege escalation within a Ceph multifs cluster. A user with restricted permissions on one file system could gain higher permissions on another file system, such as being able to create, modify, or delete files where they should only have read access. This breaks the intended access controls and can compromise data integrity and confidentiality across multiple file systems.
How can this vulnerability be detected on my network or system? Can you suggest some commands?
This vulnerability can be detected by testing the authorization behavior of the 'client.usr' user across multiple Ceph file systems:

1. Authorize different permissions for 'client.usr' on two file systems, e.g., read-only on 'fsname1' and read-write on 'fsname2':

```shell
$ ceph fs authorize fsname1 client.usr / r
$ ceph fs authorize fsname2 client.usr / rw
```

2. Update the keyring:

```shell
$ ceph auth get client.usr >> ./keyring
```

3. Mount 'fsname1' as 'client.usr' (replace `<fsid>` with your cluster's FSID):

```shell
$ sudo bin/mount.ceph usr@<fsid>.fsname1=/ /kmnt_fsname1_usr/
```

4. Attempt to create a file on 'fsname1' (this should fail on a patched system):

```shell
$ touch /kmnt_fsname1_usr/file1
```

5. Mount 'fsname1' as 'client.admin' and create a file:

```shell
$ sudo bin/mount.ceph admin@<fsid>.fsname1=/ /kmnt_fsname1_admin
$ echo "data" > /kmnt_fsname1_admin/admin_file1
```

6. Try removing that file as 'client.usr' (this should also fail on a patched system):

```shell
$ rm -f /kmnt_fsname1_usr/admin_file1
```

If the creation or deletion succeeds for 'client.usr' on 'fsname1' despite only having read permission, the vulnerability exists.
What immediate steps should I take to mitigate this vulnerability?
Immediate mitigation involves updating the Linux kernel to a release containing the fix, as the issue arises from the kernel CephFS client's improper validation of mds auth caps across multiple file systems. Until the patch is applied, carefully review and restrict user permissions to minimize risk, and monitor for unauthorized file operations by users with limited permissions. Additionally, avoid running untrusted clients with elevated permissions on multi-file-system Ceph clusters.