CVE-2026-2473
Status: Received - Intake
Bucket Squatting in Google Vertex AI Enables Remote Code Execution

Publication date: 2026-02-20

Last updated on: 2026-02-20

Assigner: GoogleCloud

Description
Predictable bucket naming in Vertex AI Experiments in Google Cloud Vertex AI from version 1.21.0 up to (but not including) 1.133.0 on Google Cloud Platform allows an unauthenticated remote attacker to achieve cross-tenant remote code execution, model theft, and model poisoning by pre-creating predictably named Cloud Storage buckets (bucket squatting). This vulnerability has been patched and no customer action is needed.
Meta Information
Published: 2026-02-20
Last Modified: 2026-02-20
Generated: 2026-05-07
AI Q&A: 2026-02-20
EPSS Evaluated: 2026-05-05
References: NVD, EUVD
Affected Vendors & Products
Vendor | Product | Version / Range
google | cloud_vertex_ai | from 1.21.0 (inclusive) to 1.133.0 (exclusive)
google | cloud_vertex_ai | up to 1.133.0 (exclusive)
CWE ID | Description
CWE-340 (Generation of Predictable Numbers or Identifiers) | The product uses a scheme that generates numbers or identifiers that are more predictable than required.
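To illustrate CWE-340, the sketch below contrasts a predictable naming scheme with a hardened one. The naming patterns here are hypothetical examples, not Vertex AI's actual scheme: a name derived only from public or guessable inputs (project ID, region) can be pre-computed by anyone, while a cryptographically random suffix makes squatting infeasible.

```python
import secrets

# Hypothetical predictable scheme (CWE-340): the name is fully
# determined by guessable inputs, so an attacker can pre-compute it.
def predictable_bucket_name(project_id: str, region: str) -> str:
    return f"{project_id}-{region}-aiplatform"

# Hardened scheme: a cryptographically random 16-hex-char suffix makes
# the name infeasible to guess or pre-create.
def unpredictable_bucket_name(project_id: str, region: str) -> str:
    return f"{project_id}-{region}-aiplatform-{secrets.token_hex(8)}"

print(predictable_bucket_name("my-project", "us-central1"))
# prints "my-project-us-central1-aiplatform" (same every run)
print(unpredictable_bucket_name("my-project", "us-central1"))
# random suffix, different every run
```

Random suffixes alone are a mitigation, not a full fix; a robust service should also verify that any pre-existing bucket it reuses is owned by the expected project.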
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability involves predictable bucket naming in Vertex AI Experiments on Google Cloud Platform versions from 1.21.0 up to (but not including) 1.133.0. An unauthenticated remote attacker can exploit this by pre-creating Cloud Storage buckets with predictable names, a technique known as Bucket Squatting.

By doing so, the attacker can achieve cross-tenant remote code execution, steal machine learning models, and poison models used by other tenants.
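The attack flow described above can be sketched as a toy simulation. The naming pattern and ownership model below are simplified assumptions for illustration; the key real-world property is that Cloud Storage bucket names are globally unique, so whoever creates a name first owns it.

```python
# Toy simulation of bucket squatting. The naming scheme below is
# hypothetical, not Vertex AI's real one.
global_buckets: dict[str, str] = {}  # bucket name -> owner

def create_bucket(name: str, owner: str) -> bool:
    """Bucket names are globally unique; the first creator owns the bucket."""
    if name in global_buckets:
        return False
    global_buckets[name] = owner
    return True

def guess_name(victim_project: str) -> str:
    # Derived entirely from public/guessable information.
    return f"{victim_project}-experiments"

# 1. Attacker pre-creates the bucket the victim's tooling will expect.
create_bucket(guess_name("acme-ml"), owner="attacker")

# 2. The victim's tooling later finds the bucket already exists; if it
#    does not verify ownership, it writes artifacts (and may later load
#    code or models) from attacker-controlled storage.
created = create_bucket(guess_name("acme-ml"), owner="acme-ml")
print(created)  # prints False: the name was already squatted
print(global_buckets[guess_name("acme-ml")])  # prints attacker
```

The cross-tenant impact follows from step 2: any tenant whose expected bucket name can be guessed is exposed to the same pre-created bucket trick.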


How can this vulnerability impact me?

The impact of this vulnerability includes the risk of unauthorized remote code execution across tenants, which can compromise system integrity and security.

Additionally, attackers can steal proprietary machine learning models, leading to intellectual property loss.

Model poisoning can degrade the performance or reliability of AI models, potentially causing incorrect or harmful outputs.


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

I don't know


How can this vulnerability be detected on my network or system? Can you suggest some commands?

I don't know


What immediate steps should I take to mitigate this vulnerability?

This vulnerability has been patched in Google Cloud Vertex AI versions 1.133.0 and later.

No customer action is needed; the issue is fixed in the updated versions.
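Although no action is required, teams may still want to confirm their client library version. The sketch below assumes the affected component corresponds to the `google-cloud-aiplatform` Python package, whose version numbers match the stated range; that mapping is an assumption, so verify it against the official advisory for your environment.

```python
# Check whether an installed SDK version falls in the affected range
# [1.21.0, 1.133.0). Assumes the affected component is the
# `google-cloud-aiplatform` Python package -- an assumption; confirm
# against the official advisory.
from importlib import metadata

def parse(v: str) -> tuple[int, ...]:
    # Simplified parser: handles plain "X.Y.Z" versions only.
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def is_affected(version: str) -> bool:
    return (1, 21, 0) <= parse(version) < (1, 133, 0)

try:
    installed = metadata.version("google-cloud-aiplatform")
    print(installed, "affected" if is_affected(installed) else "not affected")
except metadata.PackageNotFoundError:
    print("google-cloud-aiplatform is not installed")
```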

