CVE-2026-40252
Broken Access Control in FastGPT API Enables Cross-Tenant Access
Publication date: 2026-04-10
Last updated on: 2026-04-21
Assigner: GitHub, Inc.
Description
CVSS Scores
EPSS Scores
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| fastgpt | fastgpt | < 4.14.10.4 (excluding) |
Exploitability
| CWE ID | Description |
|---|---|
| CWE-284 | The product does not restrict or incorrectly restricts access to a resource from an unauthorized actor. |
| CWE-639 | The system's authorization functionality does not prevent one user from gaining access to another user's data or record by modifying the key value identifying the data. |
AI Powered Q&A
Can you explain this vulnerability to me?
This vulnerability is a Broken Access Control issue (specifically an Insecure Direct Object Reference, also called Broken Object Level Authorization) in FastGPT, an AI Agent building platform. Before version 4.14.10.4, any authenticated team could access and execute applications belonging to other teams by supplying a foreign application ID (appId). The API validates the team token but never verifies that the requested application actually belongs to the authenticated team. This flaw allows unauthorized cross-tenant data exposure and execution of private AI workflows.
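The missing check described above can be sketched in a few lines. This is a hypothetical illustration, not FastGPT's actual code: the names (`App`, `getAppVulnerable`, `getAppFixed`, the in-memory `apps` map) are all assumptions made for the example.

```typescript
// Illustrative sketch of the IDOR/BOLA flaw; all identifiers are
// hypothetical and do not reflect FastGPT's real API surface.

interface App {
  id: string;
  teamId: string; // tenant that owns this application
}

const apps = new Map<string, App>([
  ["app-a", { id: "app-a", teamId: "team-1" }],
  ["app-b", { id: "app-b", teamId: "team-2" }],
]);

// Vulnerable pattern: the handler trusts that the caller holds *a* valid
// team token, but never checks that the app belongs to that team (CWE-639).
function getAppVulnerable(callerTeamId: string, appId: string): App | undefined {
  return apps.get(appId); // foreign appId is served: cross-tenant access
}

// Fixed pattern: authorize the object, not just the caller.
function getAppFixed(callerTeamId: string, appId: string): App | undefined {
  const app = apps.get(appId);
  if (!app || app.teamId !== callerTeamId) return undefined; // deny foreign appId
  return app;
}

console.log(getAppVulnerable("team-1", "app-b")); // team-2's app leaks to team-1
console.log(getAppFixed("team-1", "app-b"));      // undefined: access denied
```

The key point is that validating the token only authenticates the caller; authorization requires an additional ownership check on every object referenced by a client-supplied identifier.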
How can this vulnerability impact me?
The vulnerability can lead to unauthorized access to other teams' applications and data within the FastGPT platform. This means that sensitive or private AI workflows and data from other teams can be exposed or executed without permission, potentially leading to data breaches, loss of confidentiality, and misuse of AI resources.
What immediate steps should I take to mitigate this vulnerability?
To mitigate this vulnerability, you should upgrade FastGPT to version 4.14.10.4 or later, where the Broken Access Control issue has been fixed.
How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?
The vulnerability allows cross-tenant data exposure and unauthorized execution of private AI workflows by enabling any authenticated team to access applications belonging to other teams. This unauthorized access to data and workflows can lead to violations of data protection and privacy requirements commonly mandated by standards and regulations such as GDPR and HIPAA.
Specifically, the exposure of private data across tenants may result in non-compliance with regulations that require strict access controls and data segregation to protect personal and sensitive information.