CVE-2025-40362
BaseFortify

Publication date: 2025-12-16

Last updated on: 2025-12-18

Assigner: kernel.org

Description
In the Linux kernel, the following vulnerability has been resolved:

ceph: fix multifs mds auth caps issue

The mds auth caps check should also validate the fsname along with the associated caps. Not doing so would result in applying the mds auth caps of one fs onto another fs in a multifs Ceph cluster. The bug causes multiple issues w.r.t. user authentication; the following is one such example.

Steps to Reproduce (on a vstart cluster):

1. Create two file systems in the cluster, say 'fsname1' and 'fsname2'.
2. Authorize read-only permission for the user 'client.usr' on fs 'fsname1':
   $ ceph fs authorize fsname1 client.usr / r
3. Authorize read and write permission for the same user 'client.usr' on fs 'fsname2':
   $ ceph fs authorize fsname2 client.usr / rw
4. Update the keyring:
   $ ceph auth get client.usr >> ./keyring

With the above permissions for the user 'client.usr', the expectation is:

a. 'client.usr' should only be able to read the contents of file system 'fsname1', and should not be allowed to create or delete files on it.
b. 'client.usr' should be able to read/write on file system 'fsname2'.

But with this bug, 'client.usr' is allowed to read/write on file system 'fsname1'. See below.

5. Mount the file system 'fsname1' as the user 'client.usr':
   $ sudo bin/mount.ceph usr@.fsname1=/ /kmnt_fsname1_usr/
6. Try creating a file on file system 'fsname1' as user 'client.usr'. This should fail, but passes with this bug:
   $ touch /kmnt_fsname1_usr/file1
7. Mount the file system 'fsname1' as the user 'client.admin' and create a file:
   $ sudo bin/mount.ceph admin@.fsname1=/ /kmnt_fsname1_admin
   $ echo "data" > /kmnt_fsname1_admin/admin_file1
8. Try removing an existing file on file system 'fsname1' as the user 'client.usr'. This shouldn't succeed, but succeeds with the bug:
   $ rm -f /kmnt_fsname1_usr/admin_file1

For more information, please take a look at the corresponding mds/fuse patch and tests added by looking into the tracker mentioned below.
v2: Fix a possible null dereference in doutc
v3: Don't store fsname from mdsmap; validate against ceph_mount_options's fsname and use it
v4: Code refactor, better warning message, and fix a possible compiler warning
[ Slava.Dubeyko: "fsname check failed" -> "fsname mismatch" ]
Meta Information
Generated: 2026-05-07
AI Q&A: 2025-12-16
EPSS Evaluated: 2026-05-05
NVD
Affected Vendors & Products
Vendor: ceph
Product: ceph
Version / Range: *
CWE ID: CWE-UNKNOWN
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability in the Linux kernel's Ceph file system involves improper validation of multi-file system (multifs) metadata server (mds) authorization capabilities (auth caps). Specifically, the mds auth caps check fails to validate the file system name (fsname) along with the associated caps. As a result, the authorization caps of one file system can be incorrectly applied to another file system within a multifs Ceph cluster. This means a user authorized with limited permissions on one file system could gain unauthorized elevated permissions (like read/write) on another file system.
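The broken check can be modeled in miniature. The following is a simplified Python sketch, not kernel code; the function name `caps_allow`, the grant tuples, and the `validate_fsname` switch are all invented to illustrate the difference between the buggy and fixed behavior:

```python
def caps_allow(caps, mounted_fsname, want, validate_fsname):
    """caps: list of (fsname, perms) grants held by the client.
    Returns True if any grant permits the requested access 'want'.
    With validate_fsname=False (modeling the bug), a grant issued
    for one filesystem is consulted for any mounted filesystem."""
    for fsname, perms in caps:
        if validate_fsname and fsname != mounted_fsname:
            continue  # the fix: skip grants belonging to other filesystems
        if want in perms:
            return True
    return False

# Grants matching the advisory's example: read-only on fsname1,
# read-write on fsname2.
caps = [("fsname1", "r"), ("fsname2", "rw")]

# Buggy behavior: a write on fsname1 is allowed because the rw
# grant for fsname2 is (wrongly) applied to fsname1 as well.
print(caps_allow(caps, "fsname1", "w", validate_fsname=False))  # True
# Fixed behavior: only fsname1's read-only grant is considered.
print(caps_allow(caps, "fsname1", "w", validate_fsname=True))   # False
```

With fsname validation on, the rw grant still applies where it was issued, so writes on 'fsname2' remain allowed.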


How can this vulnerability impact me?

This vulnerability can lead to unauthorized access and privilege escalation within a Ceph multifs cluster. A user with restricted permissions on one file system could gain higher permissions on another file system, such as being able to create, modify, or delete files where they should only have read access. This breaks the intended access controls and can compromise data integrity and confidentiality across multiple file systems.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability can be detected by testing the authorization behavior of the 'client.usr' user across multiple Ceph file systems. Specifically, you can follow these steps and commands to verify whether the bug is present:

1. Authorize different permissions for 'client.usr' on two file systems, e.g., read-only on 'fsname1' and read-write on 'fsname2':
   $ ceph fs authorize fsname1 client.usr / r
   $ ceph fs authorize fsname2 client.usr / rw
2. Update the keyring:
   $ ceph auth get client.usr >> ./keyring
3. Mount 'fsname1' as 'client.usr':
   $ sudo bin/mount.ceph usr@.fsname1=/ /kmnt_fsname1_usr/
4. Attempt to create a file on 'fsname1' (this should fail if the vulnerability is not present):
   $ touch /kmnt_fsname1_usr/file1
5. Mount 'fsname1' as 'client.admin' and create a file:
   $ sudo bin/mount.ceph admin@.fsname1=/ /kmnt_fsname1_admin
   $ echo "data" > /kmnt_fsname1_admin/admin_file1
6. Try removing the file as 'client.usr' (this should also fail if the vulnerability is not present):
   $ rm -f /kmnt_fsname1_usr/admin_file1

If the creation or deletion succeeds for 'client.usr' on 'fsname1' despite the user only having read permission, the vulnerability exists.
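Alongside the behavioral test, it can help to confirm what the cluster actually granted. Below is a hedged Python sketch; the helper name `mds_caps_by_fsname` and the sample keyring text are invented for illustration, and real input would come from `ceph auth get client.usr`. It parses the `caps mds` line and maps each filesystem name to its permissions:

```python
import re

def mds_caps_by_fsname(keyring_text):
    """Parse the 'caps mds' line of a keyring dump and return a
    {fsname: perms} map. A None key means the grant carries no
    fsname= qualifier and so is not pinned to one filesystem."""
    m = re.search(r'caps mds = "([^"]*)"', keyring_text)
    if not m:
        return {}
    result = {}
    # Grants are comma-separated, e.g.
    # "allow r fsname=fsname1, allow rw fsname=fsname2"
    for grant in m.group(1).split(","):
        perms = re.search(r"allow\s+(\S+)", grant)
        fs = re.search(r"fsname=(\S+)", grant)
        if perms:
            result[fs.group(1) if fs else None] = perms.group(1)
    return result

# Illustrative keyring text in the shape produced by the
# 'ceph fs authorize' steps above (key value redacted):
sample = '''[client.usr]
\tkey = AQD...redacted...
\tcaps mds = "allow r fsname=fsname1, allow rw fsname=fsname2"
'''
print(mds_caps_by_fsname(sample))
# {'fsname1': 'r', 'fsname2': 'rw'}
```

If the parsed map shows only 'r' for 'fsname1' yet the behavioral test above still permits writes there, the kernel-side check is the culprit.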


What immediate steps should I take to mitigate this vulnerability?

Immediate mitigation involves updating the Ceph cluster to a version where the vulnerability is fixed, as the issue arises from improper validation of mds auth caps across multiple file systems. Until the patch is applied, carefully review and restrict user permissions to minimize risk, and monitor for unauthorized file operations by users with limited permissions. Additionally, avoid running untrusted clients with elevated permissions on multi-filesystem Ceph clusters.
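As part of that permissions review, clients whose mds caps carry no fsname qualifier are the ones most exposed on a multi-filesystem cluster. The following is a hedged Python sketch; the function name and the sample dump are invented for illustration, and real input would come from `ceph auth ls`:

```python
def clients_without_fsname_pinning(auth_dump):
    """Scan 'ceph auth ls'-style output and return the names of
    client entities whose '[mds]' caps contain no fsname=
    qualifier; on a multifs cluster these deserve review."""
    flagged = []
    current = None
    for line in auth_dump.splitlines():
        line = line.strip()
        if line.startswith("client."):
            current = line
        elif line.startswith("caps: [mds]") and "fsname=" not in line:
            if current:
                flagged.append(current)
    return flagged

# Illustrative dump (key values redacted):
sample = """client.admin
\tkey: AQ...redacted...
\tcaps: [mds] allow *
\tcaps: [mon] allow *
client.usr
\tkey: AQ...redacted...
\tcaps: [mds] allow r fsname=fsname1, allow rw fsname=fsname2
"""
print(clients_without_fsname_pinning(sample))
# ['client.admin']
```

Flagged entities are not necessarily misconfigured (cluster-wide admin caps are often intentional), but they are the accounts worth auditing while the kernel fix is rolled out.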

