CVE-2026-5971
Improper Neutralization in MetaGPT XML Handler Enables Remote Code Execution
Publication date: 2026-04-09
Last updated on: 2026-04-29
Assigner: VulDB
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| deepwisdom | metagpt | up to and including 0.8.1 |
Exploitability
| CWE ID | Description |
|---|---|
| CWE-94 | The product constructs all or part of a code segment using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the syntax or behavior of the intended code segment. |
| CWE-95 | The product receives input from an upstream component, but it does not neutralize or incorrectly neutralizes code syntax before using the input in a dynamic evaluation call (e.g. "eval"). |
AI Powered Q&A
How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?
The vulnerability CVE-2026-5971 allows remote attackers to execute arbitrary code on the host system by exploiting unsafe use of eval() on untrusted inputs. This can lead to unauthorized access to sensitive files and full system compromise.
Such unauthorized access and potential data breaches can negatively impact compliance with common standards and regulations like GDPR and HIPAA, which require protection of sensitive personal and health information against unauthorized access and disclosure.
Because the vulnerability enables remote code execution and potential data exposure, organizations running affected versions of MetaGPT face an increased risk of non-compliance with these regulations if the flaw is exploited.
Can you explain this vulnerability to me?
CVE-2026-5971 is a critical remote code execution (RCE) vulnerability in FoundationAgents MetaGPT, specifically in the ActionNode component's xml_fill method. The vulnerability arises because the method uses Python's unsafe eval() function to parse string data extracted from Large Language Model (LLM) responses into Python objects for fields typed as list or dict without any sanitization.
An attacker who can influence the LLM output (for example, via prompt injection or a compromised model) can inject malicious Python code inside XML tags. When the xml_fill method processes this output, it executes the injected code, allowing arbitrary command execution on the host system.
The flaw stems from improper neutralization of directives in dynamically evaluated code (CWE-94/CWE-95): because `eval()` is applied to unsanitized input, a successful attack can lead to full system compromise.
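The following is a minimal sketch of that unsafe pattern, assuming a simplified tag format and a hypothetical function name; it is not the actual MetaGPT source:

```python
# Hypothetical simplification of the vulnerable pattern: content extracted
# from an LLM's XML-style tags is passed straight to eval() for list/dict
# fields, so attacker-influenced output becomes executable Python.
import re

def xml_fill_sketch(llm_output: str, field_type: type):
    match = re.search(r"<field>(.*?)</field>", llm_output, re.DOTALL)
    raw = match.group(1).strip() if match else ""
    if field_type in (list, dict):
        return eval(raw)  # VULNERABLE: executes arbitrary Python expressions
    return raw

# A payload injected into the LLM output runs on the host, e.g. creating
# the marker file mentioned in the detection guidance below:
payload = "<field>__import__('os').system('touch /tmp/verify_rce_realworld')</field>"
# xml_fill_sketch(payload, list)  # would execute the os.system() call
```

A safe variant of this parsing step using standard-library parsers is sketched in the mitigation section below.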
How can this vulnerability impact me?
Successful exploitation of this vulnerability allows remote attackers to execute arbitrary Python code on the host system running MetaGPT.
- Attackers can run arbitrary commands, potentially accessing sensitive files.
- They can install backdoors or malware, leading to persistent system compromise.
- The system's integrity and confidentiality can be severely impacted.
How can this vulnerability be detected on my network or system? Can you suggest some commands?
This vulnerability can be detected by checking for the presence of the vulnerable MetaGPT version (up to 0.8.1) and by monitoring for suspicious execution of Python code via the `eval()` function in the `xml_fill` method of the file `metagpt/actions/action_node.py`.
Since the vulnerability involves remote code execution through crafted XML tags, detection can include monitoring for unexpected file creations or command executions triggered by XML input, such as the creation of files like `/tmp/verify_rce_realworld`.
Suggested commands to detect exploitation attempts or presence of the vulnerability include:
- Check whether a vulnerable MetaGPT version (up to and including 0.8.1) is installed: `pip show metagpt`, or use the Python sketch after this list.
- Search for suspicious files created by exploitation attempts, e.g., `ls -l /tmp/verify_rce_realworld`.
- Monitor running Python processes for unusual child processes or unexpected commands, for example with `ps aux | grep python`, and inspect application logs for signs of injected code being evaluated.
- Use file integrity monitoring tools to detect unexpected changes in files or directories related to MetaGPT.
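As a starting point, a small Python check along these lines can flag the affected version range and the presence of `eval()` in the handler named in this advisory. The module path comes from this advisory; the simple version parsing is an assumption (pre-release suffixes would need extra handling):

```python
# Hedged detection sketch: flag an affected metagpt install and scan the
# handler named in this advisory for eval() usage. Assumes a plain
# "major.minor.patch" version string.
import importlib.metadata
import importlib.util
import pathlib

try:
    version = importlib.metadata.version("metagpt")
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts <= (0, 8, 1):
        print(f"metagpt {version} is in the affected range (<= 0.8.1)")
except importlib.metadata.PackageNotFoundError:
    print("metagpt is not installed in this environment")

# Scan metagpt/actions/action_node.py (the file named in this advisory).
try:
    spec = importlib.util.find_spec("metagpt.actions.action_node")
except ModuleNotFoundError:
    spec = None
if spec and spec.origin:
    source = pathlib.Path(spec.origin).read_text()
    if "eval(" in source:
        print(f"eval( found in {spec.origin}; review xml_fill manually")
```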
What immediate steps should I take to mitigate this vulnerability?
Immediate mitigation steps include preventing the unsafe execution of untrusted code by avoiding the use of `eval()` on untrusted input in the `xml_fill` method.
Since the project has not yet released a fix for the vulnerability, a recommended interim approach is to sandbox or isolate code execution.
One effective mitigation is to integrate a sandboxed execution environment such as `exec-sandbox`, which runs code inside lightweight QEMU microVMs, providing strong hardware-level isolation and preventing host compromise.
- Replace unsafe `eval()` calls with safe parsing methods such as `ast.literal_eval` or `json.loads`, which parse data without executing arbitrary code (see the sketch after this list).
- Use the `exec-sandbox` solution to isolate code execution, as it provides VM-based containment with no host filesystem access and controlled network access.
- Avoid running MetaGPT in privileged containers or directly on the host without isolation.
- Monitor and restrict inputs to the vulnerable XML handler to prevent injection of malicious payloads.
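A minimal sketch of the first bullet, using only standard-library parsers; the function name is illustrative, not MetaGPT's API:

```python
# Sketch of a safe replacement for eval() on LLM-produced list/dict strings.
# json.loads handles JSON-style values; ast.literal_eval accepts Python
# literals but never executes code.
import ast
import json

def safe_parse(raw: str):
    """Parse a list/dict string without executing arbitrary code."""
    try:
        return json.loads(raw)  # strict JSON first
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        pass
    try:
        return ast.literal_eval(raw)  # Python literals only
    except (ValueError, SyntaxError):
        raise ValueError(f"refusing to evaluate non-literal input: {raw[:80]!r}")

print(safe_parse('["a", "b"]'))  # -> ['a', 'b'] via json.loads
print(safe_parse("{'k': 1}"))    # -> {'k': 1} via ast.literal_eval
# The injection payload now fails instead of running:
# safe_parse("__import__('os').system('id')")  # raises ValueError
```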