CVE-2026-5972
Status: Received - Intake
OS Command Injection in FoundationAgents MetaGPT Terminal.run_command

Publication date: 2026-04-09

Last updated on: 2026-04-29

Assigner: VulDB

Description
A vulnerability has been found in FoundationAgents MetaGPT up to version 0.8.1. The issue affects the function Terminal.run_command in the file metagpt/tools/libs/terminal.py. The manipulation leads to OS command injection, and the attack can be launched remotely. The exploit has been publicly disclosed and may be used. The patch is identified by commit d04ffc8dc67903e8b327f78ec121df5e190ffc7b; applying it is the recommended remediation.
Meta Information
Generated: 2026-05-07
AI Q&A: 2026-04-09
EPSS Evaluated: 2026-05-05
Affected Vendors & Products
Vendor: deepwisdom
Product: metagpt
Affected versions: up to and including 0.8.1
CWE Classification
CWE-77: The product constructs all or part of a command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended command when it is sent to a downstream component.
CWE-78: The product constructs all or part of an OS command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended OS command when it is sent to a downstream component.
AI Powered Q&A
Can you explain this vulnerability to me?

CVE-2026-5972 is a command injection vulnerability found in the MetaGPT project, specifically in the Terminal.run_command() function. This function executes shell commands but uses a weak blocklist that only filters out two specific command substrings, leaving it vulnerable to arbitrary command execution.

Because the Terminal class is exposed as a tool callable by large language models (LLMs), an attacker can exploit prompt injection attacks to execute arbitrary shell commands remotely. The vulnerability allows dangerous commands like file deletion, downloading and executing malicious scripts, or running arbitrary Python code.

The vulnerability arises from insufficient input filtering and the direct writing of commands to a persistent bash shell's stdin without proper sanitization, enabling remote code execution (RCE).
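To make the failure mode concrete, here is a minimal, hypothetical sketch of the vulnerable pattern described above: a substring blocklist in front of a shell. The blocklist entries and function name below are illustrative assumptions, not MetaGPT's actual code (the advisory does not name the two filtered substrings).

```python
# Hypothetical sketch of the vulnerable pattern (NOT MetaGPT's actual code).
# A blocklist checks for a few literal substrings; anything that passes is
# written verbatim to a persistent bash shell's stdin.

BLOCKLIST = ["rm -rf /", "shutdown"]  # illustrative: only two literal substrings

def run_command_vulnerable(cmd: str) -> bool:
    """Return True if the command would be forwarded to the shell."""
    for banned in BLOCKLIST:
        if banned in cmd:
            return False  # rejected by the blocklist
    # In the vulnerable pattern, cmd is written to bash stdin here unmodified.
    return True

# Trivially obfuscated or chained payloads slip past the substring check:
print(run_command_vulnerable("rm -rf /"))     # blocked
print(run_command_vulnerable("rm -r -f /"))   # bypass: different spelling
print(run_command_vulnerable("echo hi; id"))  # bypass: chained command
```

This is why substring blocklists fail for shell input: the shell's grammar (chaining with `;`, alternate flag spellings, command substitution) offers endless equivalent encodings of any blocked command.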


How can this vulnerability impact me?

This vulnerability can lead to full system compromise. An attacker exploiting this flaw can execute arbitrary commands on the affected system remotely, enabling:

  • Reading and writing sensitive files without authorization.
  • Installing malware or backdoors on the system.
  • Exfiltrating sensitive data.
  • Performing lateral movement to other systems within the network.

Any user running MetaGPT with agentic workflows where the LLM can be influenced by external input is at risk.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

Detection of this vulnerability involves monitoring for unusual or unauthorized command executions that exploit the command injection flaw in the Terminal.run_command function.

Since the vulnerability allows remote code execution via LLM prompt injection, you can look for evidence of commands that bypass the weak blocklist, such as creation of unexpected files (e.g., files like /tmp/terminal_rce_proof.txt) or execution of commands like id, curl, or rm.

Suggested commands to detect exploitation attempts include:

  • Check for suspicious files created by exploits: `ls -l /tmp/terminal_rce_proof.txt`
  • Search system logs for unusual command executions or errors related to MetaGPT: `grep -i metagpt /var/log/syslog` or `journalctl -u metagpt`
  • Monitor running processes for unexpected commands: `ps aux | grep -E 'curl|bash|python'`
  • Use network monitoring tools to detect outbound connections initiated by suspicious commands: `netstat -tunp` or `ss -tunp`

Additionally, reviewing the MetaGPT logs or enabling verbose logging for the Terminal tool may help identify command injection attempts.
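As a complement to the shell commands above, a small script can sweep logs for executed commands containing shell metacharacters typical of injection payloads. This is a hedged sketch: the log location and line format (`run_command:` prefix) are assumptions to adapt to your deployment, not a documented MetaGPT log format.

```python
import re

# Metacharacters commonly used to chain or substitute commands: ; & | ` $( )
SUSPICIOUS = re.compile(r"[;&|`]|\$\(")

def find_suspicious_commands(log_text: str) -> list:
    """Return log lines that record a command containing shell metacharacters.

    Assumes executed commands appear on lines containing 'run_command:';
    adjust the marker to match your actual log format.
    """
    hits = []
    for line in log_text.splitlines():
        if "run_command" in line and SUSPICIOUS.search(line):
            hits.append(line)
    return hits

sample = "run_command: echo hi; id\nrun_command: ls -l"
print(find_suspicious_commands(sample))  # flags only the chained command
```

A metacharacter heuristic like this produces false positives on legitimate pipelines, so treat hits as triage candidates rather than confirmed exploitation.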


What immediate steps should I take to mitigate this vulnerability?

The primary and recommended mitigation is to apply the official patch identified by commit d04ffc8dc67903e8b327f78ec121df5e190ffc7b, which introduces strict command validation and a safe command allowlist.

Key immediate mitigation steps include:

  • Update MetaGPT to the patched version that implements a safe command allowlist, blocking dangerous commands and shell metacharacters.
  • Disable or restrict the use of the Terminal.run_command function in environments where untrusted input can influence LLM prompts.
  • Implement sandboxing or containerization to isolate command execution and limit potential damage from exploitation.
  • Require explicit user approval before executing any shell commands triggered by LLM agents.
  • Monitor systems for signs of exploitation attempts as described in detection steps.

These steps collectively reduce the risk of remote code execution and help secure the MetaGPT environment against this vulnerability.
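For teams that cannot patch immediately, the allowlist-plus-metacharacter approach described above can be sketched as follows. This is an illustrative validator written for this advisory, not the patched MetaGPT code; the command set and rejected characters are assumptions to tune for your environment.

```python
import shlex

# Illustrative allowlist of benign read-only commands (an assumption, not
# the set used by the official patch).
SAFE_COMMANDS = {"ls", "cat", "pwd", "echo", "grep"}

# Characters that enable chaining, substitution, or redirection in a shell.
SHELL_METACHARACTERS = set(";&|`$<>(){}")

def is_command_allowed(cmd: str) -> bool:
    """Allow a command only if it has no shell metacharacters and its
    first token is on the allowlist."""
    if any(ch in SHELL_METACHARACTERS for ch in cmd):
        return False  # reject chaining/substitution/redirection outright
    try:
        tokens = shlex.split(cmd)
    except ValueError:
        return False  # unbalanced quotes or similar parse failure
    return bool(tokens) and tokens[0] in SAFE_COMMANDS

print(is_command_allowed("ls -l"))             # permitted
print(is_command_allowed("echo hi; rm -rf /")) # rejected: metacharacter
print(is_command_allowed("curl http://x"))     # rejected: not on allowlist
```

Default-deny validation like this inverts the vulnerable blocklist logic: instead of enumerating bad inputs, only explicitly trusted commands are forwarded, and everything else fails closed.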


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

The vulnerability CVE-2026-5972 allows remote code execution through command injection in the MetaGPT project, potentially leading to unauthorized access to sensitive data and system compromise.

Such unauthorized access and potential data exfiltration could impact compliance with data protection regulations like GDPR and HIPAA, which require strict controls over data confidentiality, integrity, and system security.

If exploited, this vulnerability could lead to breaches involving personal or protected health information, thereby violating regulatory requirements and exposing organizations to legal and financial penalties.

Mitigation through patching and implementing strict command validation is essential to maintain compliance and reduce the risk of regulatory violations.

