CVE-2026-34070
Status: Received - Intake
Directory Traversal in LangChain Prompts Allows Arbitrary File Read

Publication date: 2026-03-31

Last updated on: 2026-04-02

Assigner: GitHub, Inc.

Description
LangChain is a framework for building agents and LLM-powered applications. Prior to version 1.2.22, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). This issue has been patched in version 1.2.22.
Affected Vendors & Products

Vendor: langchain
Product: langchain
Version / Range: all versions prior to 1.2.22
CWE

CWE-22: The product uses external input to construct a pathname that is intended to identify a file or directory that is located underneath a restricted parent directory, but the product does not properly neutralize special elements within the pathname that can cause the pathname to resolve to a location that is outside of the restricted directory.
AI Powered Q&A
Can you explain this vulnerability to me?

CVE-2026-34070 is a path traversal vulnerability in the LangChain framework's legacy prompt-loading functions prior to version 1.2.22. Specifically, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized configuration dictionaries without validating against directory traversal sequences (like '..') or absolute path injections.

This means that if an attacker can influence or control the prompt configuration passed to functions like load_prompt() or load_prompt_from_config(), they can cause the application to read arbitrary files on the host filesystem. The only constraint on file reading is the file extension, which must be .txt for templates or .json/.yaml for examples.
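To make the pattern concrete, here is a minimal, self-contained sketch of the vulnerable idiom — not LangChain's actual code — in which a loader trusts a path taken directly from a deserialized config dict and checks only the file extension. The key name `template_path` mirrors the advisory; everything else is illustrative:

```python
from pathlib import Path

def load_template_unsafe(config: dict, base_dir: str = ".") -> str:
    # The path comes straight from the deserialized config; only the
    # extension is validated, so "../" segments pass through, and an
    # absolute value makes pathlib discard base_dir entirely.
    template_path = Path(base_dir) / config["template_path"]
    if template_path.suffix != ".txt":
        raise ValueError("template must be a .txt file")
    return template_path.read_text()

# An attacker-influenced config can then point outside the intended
# directory, e.g. {"template_path": "../../secrets/creds.txt"}.
```

Note the `Path(base_dir) / value` join: in `pathlib`, joining with an absolute right-hand side replaces the base entirely, which is why absolute-path injection works even when a base directory is configured.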

The vulnerability was fixed in version 1.2.22 by adding strict path validation that rejects absolute paths and directory traversal components, and by deprecating unsafe serialization methods in favor of safer alternatives.
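The kind of validation the fix describes can be sketched as follows — reject absolute paths and any `..` component before touching the filesystem. This illustrates the technique under the advisory's description, not the patched code itself:

```python
from pathlib import Path, PurePosixPath

def validate_prompt_path(raw: str) -> Path:
    """Reject absolute paths and any '..' component up front.

    (A Windows drive-letter path would need an extra check; this
    sketch validates POSIX-style paths only.)
    """
    p = PurePosixPath(raw)
    if p.is_absolute():
        raise ValueError(f"absolute path not allowed: {raw!r}")
    if ".." in p.parts:
        raise ValueError(f"path traversal not allowed: {raw!r}")
    return Path(raw)
```

Checking `p.parts` rather than substring-matching on the raw string avoids false positives on legitimate names such as `notes..txt`.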


How can this vulnerability impact me?

This vulnerability can allow an attacker to read arbitrary files on the host filesystem remotely without any privileges or user interaction. This can lead to unauthorized disclosure of sensitive information such as cloud-mounted secrets, system files, internal prompts, cloud credentials, Kubernetes manifests, CI/CD configurations, and application settings.

Because the attack requires only controlling the prompt configuration dictionary, it can be exploited remotely over the network with low complexity.

The confidentiality impact is high, but there is no impact on integrity or availability.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability involves unsafe file path handling in the langchain_core.prompts.loading functions, allowing an attacker to read arbitrary files by exploiting directory traversal or absolute path injection in prompt configuration dictionaries.

Detection can focus on identifying usage of vulnerable functions such as load_prompt(), load_prompt_from_config(), or the deprecated save() method on prompt classes in versions prior to 1.2.22.

Since the vulnerability is triggered by user-influenced prompt configurations containing absolute paths or directory traversal sequences (e.g., '..'), you can audit your prompt configuration files or logs for suspicious path entries.

No single command detects this vulnerability or its exploitation directly, but general approaches include:

  • Searching your codebase or runtime environment for usage of vulnerable functions (e.g., grep for load_prompt or save in langchain-core versions < 1.2.22).
  • Monitoring application logs for errors or warnings related to path validation failures or unexpected file access attempts.
  • Checking prompt configuration files for absolute paths or directory traversal patterns in keys like template_path, examples, or example_prompt_path.
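The config audit described above can be sketched as a small recursive scan over a prompt-config dict. The key names (`template_path`, `examples`, `example_prompt_path`) come from the advisory; any other layout detail here is an assumption:

```python
from pathlib import PurePosixPath

# Keys named in the advisory as carrying file paths; extend as needed.
SUSPECT_KEYS = {"template_path", "examples", "example_prompt_path"}

def find_suspicious_paths(config: dict, prefix: str = "") -> list[str]:
    """Recursively flag suspect keys whose string value is an absolute
    path or contains a '..' component."""
    hits = []
    for key, value in config.items():
        where = f"{prefix}{key}"
        if isinstance(value, dict):
            hits.extend(find_suspicious_paths(value, where + "."))
        elif key in SUSPECT_KEYS and isinstance(value, str):
            p = PurePosixPath(value)
            if p.is_absolute() or ".." in p.parts:
                hits.append(f"{where}: {value}")
    return hits
```

Run it over any JSON/YAML prompt configs you load (after parsing them to dicts) to surface entries worth manual review.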

What immediate steps should I take to mitigate this vulnerability?

The primary mitigation is to upgrade langchain-core to version 1.2.22 or later, where strict path validation was introduced to reject absolute paths and directory traversal sequences by default.
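A quick way to check whether an installed copy falls in the vulnerable range is to compare its version against 1.2.22. This helper uses a naive three-part numeric comparison to avoid a dependency on the `packaging` library, so pre-release suffixes are a case it deliberately does not handle:

```python
import re
from importlib import metadata

FIXED = (1, 2, 22)  # first patched release per the advisory

def is_vulnerable(version: str) -> bool:
    """Naive check: compare the leading numeric parts against 1.2.22."""
    parts = tuple(int(m) for m in re.findall(r"\d+", version)[:3])
    return parts < FIXED

def check_installed(package: str = "langchain-core") -> str:
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return f"{package} is not installed"
    state = "VULNERABLE (upgrade to >=1.2.22)" if is_vulnerable(installed) else "patched"
    return f"{package} {installed}: {state}"
```

For production fleets, prefer a real dependency scanner or `pip index`/lockfile auditing; this sketch is only a local sanity check.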

Avoid using the deprecated legacy prompt serialization and loading methods such as save() and load_prompt(), and instead migrate to the newer serialization/deserialization APIs (dumpd, dumps, load, loads) from the langchain_core.load module, which do not perform unsafe filesystem reads.

If you must continue using the legacy APIs, do so only with trusted inputs and keep the allow_dangerous_paths flag set to False so that path validation is enforced.

Review and sanitize any user-influenced prompt configuration data to ensure it does not contain absolute paths or directory traversal sequences.


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

CVE-2026-34070 allows an attacker to read arbitrary files on the host filesystem by exploiting path traversal and absolute path injection vulnerabilities in legacy prompt-loading functions of the langchain-core package. This unauthorized file disclosure can include sensitive files such as cloud-mounted secrets, system files, cloud credentials, Kubernetes manifests, CI/CD configurations, and application settings.

Such unauthorized access to sensitive data can lead to violations of data protection regulations and standards like GDPR and HIPAA, which require strict controls over the confidentiality and integrity of personal and sensitive information.

Mitigation involves upgrading to langchain-core version 1.2.22 or later, which adds strict path validation to prevent directory traversal and absolute path exploits, thereby reducing the risk of unauthorized data disclosure and helping maintain compliance with these standards.

