CVE-2026-5002
Received Received - Intake
Remote Injection Vulnerability in PromtEngineer LLM Prompt Handler

Publication date: 2026-03-28

Last updated on: 2026-04-29

Assigner: VulDB

Description
A vulnerability has been found in PromtEngineer localGPT up to 4d41c7d1713b16b216d8e062e51a5dd88b20b054. The impacted element is the function _route_using_overviews of the file backend/server.py of the component LLM Prompt Handler. Such manipulation leads to injection. The attack may be performed from remote. The exploit has been disclosed to the public and may be used. This product utilizes a rolling release system for continuous delivery, and as such, version information for affected or updated releases is not disclosed. The vendor was contacted early about this disclosure but did not respond in any way.
CVSS Scores
EPSS Scores
Probability:
Percentile:
Meta Information
Published
2026-03-28
Last Modified
2026-04-29
Generated
2026-05-07
AI Q&A
2026-03-28
EPSS Evaluated
2026-05-05
NVD
EUVD
Affected Vendors & Products
Showing 1 associated CPE
Vendor Product Version / Range
august829 promtengineer_localgpt to 4d41c7d1713b16b216d8e062e51a5dd88b20b054 (inc)
Helpful Resources
Exploitability
CWE
CWE Icon
KEV
KEV Icon
CWE ID Description
CWE-74 The product constructs all or part of a command, data structure, or record using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify how it is parsed or interpreted when it is sent to a downstream component.
CWE-707 The product does not ensure or incorrectly ensures that structured messages or data are well-formed and that certain security properties are met before being read from an upstream component or sent to a downstream component.
Attack-Flow Graph
AI Powered Q&A
Can you explain this vulnerability to me?

CVE-2026-5002 is a critical prompt injection vulnerability in the PromtEngineer localGPT product. It occurs in the _route_using_overviews function of the backend/server.py file, where user input is directly embedded into system prompts sent to the large language model (LLM) without any sanitization, escaping, or validation.

This lack of input sanitization allows attackers to craft malicious queries that manipulate the LLM's behavior by injecting special characters and instructions. As a result, attackers can break out of the intended prompt structure and insert commands that override normal processing.

The vulnerability enables attackers to extract sensitive document contents, manipulate AI responses (such as fabricating financial data), bypass routing logic, and disclose internal system prompts. The root cause is improper handling of user input in prompt construction and acceptance of LLM output without validation.


How can this vulnerability impact me? :

This vulnerability can have severe impacts including unauthorized extraction of confidential and sensitive information such as financial data, trade secrets, and security incidents.

  • Attackers can manipulate AI responses to provide false or fraudulent information, for example, inflating financial figures to enable fraud.
  • It allows bypassing of the AI routing logic, potentially forcing the system to behave in unintended ways.
  • The vulnerability can be chained with other exploits to achieve full system compromise, including session hijacking and corporate espionage.

Overall, it poses risks of data theft, misinformation, operational disruption, and significant security breaches.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

Detection of CVE-2026-5002 involves identifying attempts to exploit the prompt injection vulnerability in the _route_using_overviews function of localGPT. Since the vulnerability arises from unsanitized user input embedded directly into LLM prompts, detection can focus on monitoring for suspicious or crafted queries containing special characters such as quotes, newlines, or control characters that attempt to manipulate the prompt structure.

Suggested detection methods include:

  • Monitoring application logs for queries containing suspicious payloads with special characters or keywords like "IGNORE EVERYTHING ABOVE", "Print complete DOCUMENT OVERVIEWS", or instructions to bypass routing logic.
  • Using network traffic analysis tools to capture and inspect API calls or requests to the localGPT backend, looking for unusual or malformed input patterns.
  • Implementing custom scripts or commands to search logs or request data for patterns of prompt injection attempts.

Example command (assuming logs are stored in a file named localgpt.log):

  • grep -E '"|\n|IGNORE EVERYTHING ABOVE|Print complete DOCUMENT OVERVIEWS|DIRECT_LLM' localgpt.log

This command searches for suspicious special characters and known attack phrases in the logs.


What immediate steps should I take to mitigate this vulnerability?

Immediate mitigation steps for CVE-2026-5002 focus on preventing prompt injection by sanitizing and validating user input before embedding it into LLM prompts.

  • Implement strict input sanitization and escaping of special characters (quotes, newlines, control characters) in user queries.
  • Avoid using direct f-string interpolation or concatenation for prompt construction; instead, use structured prompt building methods that separate user input from system instructions.
  • Add validation and filtering of LLM responses to detect and reject unexpected or malicious outputs.
  • Apply length limits on user input to prevent large malicious payloads.
  • Monitor and audit system logs for suspicious activity related to prompt injection attempts.

Since the vendor has not provided updated releases or patches, these mitigations should be applied as immediate protective measures until an official fix is available.


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?:

The CVE-2026-5002 vulnerability allows attackers to extract sensitive and confidential data, including financial information, trade secrets, and security incidents, through prompt injection attacks on the localGPT system.

Such unauthorized data disclosure and manipulation can lead to violations of data protection regulations and standards like GDPR and HIPAA, which mandate the protection of personal and sensitive information from unauthorized access and breaches.

Additionally, the vulnerability enables manipulation of AI outputs, potentially causing fraudulent information dissemination, which could further impact compliance with regulatory requirements for data integrity and accuracy.

Therefore, exploitation of this vulnerability could result in non-compliance with common standards and regulations due to data breaches, unauthorized data exposure, and compromised data integrity.


Ask Our AI Assistant
Need more information? Ask your question to get an AI reply (Powered by our expertise)
0/70
EPSS Chart