CVE-2026-6110
Status: Received - Intake
Remote Code Injection in FoundationAgents MetaGPT Tree-of-Thought Solver

Publication date: 2026-04-12

Last updated on: 2026-04-30

Assigner: VulDB

Description
A vulnerability was identified in FoundationAgents MetaGPT up to and including version 0.8.1. It affects the generate_thoughts function in the file metagpt/strategy/tot.py, part of the Tree-of-Thought Solver component. Manipulation of the LLM response leads to code injection, and the attack can be initiated remotely. A public exploit is available and may be used. The project was notified early through an issue report but has not yet responded.
Meta Information
Published: 2026-04-12
Last Modified: 2026-04-30
Generated: 2026-05-07
AI Q&A: 2026-04-12
EPSS Evaluated: 2026-05-05
External references: NVD, EUVD
Affected Vendors & Products (2 associated CPEs)

Vendor       Product    Version
deepwisdom   metagpt    0.8.1
deepwisdom   metagpt    0.8.0
CWE
CWE-74 (Improper Neutralization of Special Elements in Output Used by a Downstream Component, 'Injection'): The product constructs all or part of a command, data structure, or record using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify how it is parsed or interpreted when it is sent to a downstream component.
CWE-94 (Improper Control of Generation of Code, 'Code Injection'): The product constructs all or part of a code segment using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the syntax or behavior of the intended code segment.
AI Powered Q&A
Can you explain this vulnerability to me?

CVE-2026-6110 is a critical Remote Code Execution (RCE) vulnerability in the Tree-of-Thought (ToT) solver component of the MetaGPT project. The vulnerability exists because the ToT solver uses Python's unsafe eval() function to parse responses from large language models (LLMs) without validation. This allows an attacker to manipulate the LLM output through prompt injection to execute arbitrary Python code on the host system.

Specifically, the vulnerable code evaluates the LLM response directly, so the response can include malicious code such as system commands. This can be exploited remotely by crafting inputs that cause the LLM to return harmful code. The suggested fix replaces the unsafe eval() call with safe JSON parsing (json.loads()) to eliminate this risk.
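
To make the flaw concrete, here is a minimal sketch of the unsafe pattern and its safe counterpart. It is illustrative rather than MetaGPT's actual code: the function names parse_thoughts_unsafe/parse_thoughts_safe and the assumption that the LLM returns a JSON-formatted list of candidate thoughts are mine.

    import json
    import logging

    logger = logging.getLogger(__name__)

    # Vulnerable pattern (illustrative): eval() executes whatever the LLM returns,
    # so a response like "__import__('os').system('id')" runs as Python code.
    def parse_thoughts_unsafe(llm_response: str) -> list:
        return eval(llm_response)

    # Safe pattern: json.loads() only parses data and raises on anything else.
    def parse_thoughts_safe(llm_response: str) -> list:
        try:
            return json.loads(llm_response)
        except json.JSONDecodeError as e:
            logger.error("Failed to parse LLM response as JSON: %s", e)
            return []

The key design point is that eval() interprets its input as executable Python, while json.loads() treats it strictly as data and fails closed on anything that is not valid JSON.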


How can this vulnerability impact me?

This vulnerability allows remote attackers to execute arbitrary code on the affected system by manipulating the output of the LLM used in the MetaGPT Tree-of-Thought solver. This can lead to full system compromise, unauthorized access, data theft, or disruption of services.

  • Attackers can run system commands remotely.
  • Malicious payloads can be injected via prompt injection.
  • Compromised or poisoned LLM models can be exploited.
  • Man-in-the-middle attacks can inject malicious LLM API responses.
  • Supply chain attacks via poisoned prompt templates are possible.

How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability can be detected by checking if the vulnerable version of MetaGPT is in use, specifically versions up to 0.8.1 that include the unsafe use of Python's eval() function in the file metagpt/strategy/tot.py within the generate_thoughts() method.

To detect exploitation attempts, you can monitor for suspicious system activity such as unexpected command executions or creation of unusual files (e.g., files like /tmp/tot_eval_rce_proof.txt which may indicate proof-of-concept exploitation).

Since the exploit involves remote code execution triggered by malicious LLM responses, network detection could include monitoring outgoing or incoming traffic for unusual API calls to LLM services or unexpected payloads.

  • Check for existence of suspicious files created by exploits, e.g., run: ls /tmp/tot_eval_rce_proof.txt
  • Search logs for errors or suspicious entries related to JSON parsing failures or eval() execution.
  • Audit the metagpt/strategy/tot.py file for the presence of eval() usage on LLM outputs, e.g., grep -n 'eval(' metagpt/strategy/tot.py (a consolidated Python sketch of these checks follows this list).
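
The sketch below consolidates the checks above in Python. The affected version range, file path, and PoC artifact come from this advisory; the helper name check_metagpt_vulnerable and the reporting format are illustrative assumptions.

    import importlib.metadata
    import importlib.util
    import pathlib
    import re

    def check_metagpt_vulnerable() -> None:
        # PoC artifact named in this advisory; its presence suggests exploitation.
        poc = pathlib.Path("/tmp/tot_eval_rce_proof.txt")
        if poc.exists():
            print(f"WARNING: PoC artifact present: {poc}")

        # Versions up to and including 0.8.1 are affected.
        try:
            print("metagpt version installed:", importlib.metadata.version("metagpt"))
        except importlib.metadata.PackageNotFoundError:
            print("metagpt is not installed")
            return

        # Locate metagpt/strategy/tot.py and flag any eval() calls in it.
        spec = importlib.util.find_spec("metagpt.strategy.tot")
        if spec is not None and spec.origin:
            source = pathlib.Path(spec.origin).read_text(encoding="utf-8")
            for lineno, line in enumerate(source.splitlines(), start=1):
                if re.search(r"\beval\(", line):
                    print(f"{spec.origin}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        check_metagpt_vulnerable()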

What immediate steps should I take to mitigate this vulnerability?

The immediate mitigation step is to update MetaGPT to a release in which the vulnerability is fixed, once one is available; the fix replaces the unsafe eval() call with safe JSON parsing using json.loads().

If an update is not immediately available, manually patch the vulnerable code in metagpt/strategy/tot.py by replacing the eval() call with a safe parsing method such as json.loads() wrapped in a try/except block to handle parsing errors gracefully.

  • Replace:
        thoughts = eval(thoughts)
  • With:
        try:
            thoughts = json.loads(thoughts)
        except json.JSONDecodeError as e:
            logger.error(f"Failed to parse LLM response as JSON: {e}. Raw response: {thoughts}")
            thoughts = []

Additionally, monitor and restrict access to the LLM API to prevent prompt injection attacks and consider implementing input validation and output sanitization.
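
For the output-sanitization point, one option is to validate the structure of the parsed response before using it. This is a sketch under the assumption that the solver expects a JSON list of string thoughts; the function name validate_thoughts and the bounds are illustrative.

    import json

    def validate_thoughts(raw: str, max_items: int = 10, max_len: int = 2000) -> list:
        """Parse an LLM response and accept only a bounded list of plain strings."""
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            return []
        if not isinstance(parsed, list):
            return []
        # Keep only string items within sane length bounds; drop everything else.
        return [t for t in parsed[:max_items] if isinstance(t, str) and len(t) <= max_len]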

Apply comprehensive security tests to ensure no code execution occurs during parsing and that invalid inputs are handled safely.
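
As a starting point for such tests, a hypothetical pytest-style sketch might assert that hostile or malformed responses are rejected without side effects; the safe parser is inlined here so the example stands alone.

    import json
    import pathlib

    MALICIOUS_RESPONSES = [
        "__import__('os').system('touch /tmp/tot_eval_rce_proof.txt')",
        "().__class__.__mro__",
        "not json at all",
    ]

    def parse_thoughts(raw: str) -> list:
        # Same safe parsing as in the patch above.
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            return []

    def test_malicious_responses_are_rejected():
        for raw in MALICIOUS_RESPONSES:
            assert parse_thoughts(raw) == []
        # Parsing must not create the PoC artifact as a side effect.
        assert not pathlib.Path("/tmp/tot_eval_rce_proof.txt").exists()

    def test_valid_json_list_is_accepted():
        assert parse_thoughts('["thought one", "thought two"]') == ["thought one", "thought two"]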


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

The provided information does not explicitly address how the CVE-2026-6110 vulnerability affects compliance with common standards and regulations such as GDPR or HIPAA.

