CVE-2026-40087
Incomplete F-String Validation in LangChain Prompt Templates

Publication date: 2026-04-09

Last updated on: 2026-04-16

Assigner: GitHub, Inc.

Description
LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-string prompt-template validation was incomplete in two respects. First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as PromptTemplate. In particular, DictPromptTemplate and ImagePromptTemplate could accept templates containing attribute access or indexing expressions and subsequently evaluate those expressions during formatting. Second, f-string validation based on parsed top-level field names did not reject nested replacement fields inside format specifiers. In a template such as "{name:{name.__class__.__name__}}", the nested replacement field appears in the format specifier rather than in the top-level field name, so validation based on parsed field names did not reject the template even though Python formatting would still attempt to resolve the nested expression at runtime. This vulnerability is fixed in 0.3.84 and 1.2.28.
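To see why evaluating attribute access during formatting is dangerous, consider a minimal sketch. This is illustrative only, not LangChain code; the class and attribute names are invented:

```python
# Illustrative only: why attribute access in an untrusted f-string-style
# template leaks internal state. Class and attribute names are hypothetical.
class User:
    name = "alice"
    api_key = "sk-secret-123"  # internal state never meant for output

# Attacker-supplied template string:
malicious_template = "Hi {user.name}! key={user.api_key}"

# str.format resolves attribute access at render time:
rendered = malicious_template.format(user=User())
print(rendered)  # -> Hi alice! key=sk-secret-123
```

Any code path that formats an untrusted template against a richer object reproduces this leak, which is why the fix validates templates before formatting rather than trusting the caller.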
Affected Vendors & Products
Vendor Product Version / Range
langchain langchain_core up to 0.3.84 (exclusive)
langchain langchain_core from 1.0.0 (inclusive) to 1.2.28 (exclusive)
CWE
CWE-1336: The product uses a template engine to insert or process externally-influenced input, but it does not neutralize, or incorrectly neutralizes, special elements or syntax that can be interpreted as template expressions or other code directives when processed by the engine.
AI Powered Q&A
How does this vulnerability affect compliance with common standards and regulations (like GDPR or HIPAA)?

CVE-2026-40087 involves incomplete validation of f-string prompt templates in LangChain, which could allow unauthorized disclosure of internal data through template injection attacks. This vulnerability could lead to leakage of sensitive or internal information if untrusted template strings are accepted and richer Python objects are formatted, potentially exposing data in prompt outputs, model contexts, or logs.

Such unauthorized disclosure of internal or sensitive data may impact compliance with data protection regulations like GDPR or HIPAA, which require safeguarding personal and sensitive information against unauthorized access or exposure.

However, the vulnerability primarily affects scenarios where untrusted templates are accepted and unsafe attribute access or indexing is allowed in f-string templates. Applications using hardcoded templates or only accepting untrusted variable values without template modification are not affected.

The fix implemented in versions 0.3.84 and 1.2.28 strengthens validation to prevent such template injection attacks, thereby reducing the risk of unauthorized data disclosure and helping maintain compliance with relevant security and privacy standards.


Can you explain this vulnerability to me?

CVE-2026-40087 is a vulnerability in LangChain's prompt-template handling related to incomplete validation of f-string templates before versions 0.3.84 and 1.2.28. Specifically, some prompt template classes like DictPromptTemplate and ImagePromptTemplate accepted f-string templates containing attribute access (e.g., obj.attr) or indexing expressions (e.g., obj[0]) without enforcing strict validation. This allowed these expressions to be evaluated during formatting, potentially exposing internal object state.

Additionally, the vulnerability involved a flaw in f-string validation where nested replacement fields inside format specifiers were not rejected. For example, a template like "{name:{name.__class__.__name__}}" contains a nested field within the format specifier, which earlier validation failed to detect, allowing runtime evaluation of nested expressions.
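The validation gap can be reproduced with the standard library's string.Formatter, whose parse() method returns only top-level field names while leaving the nested expression buried in the raw format spec. This is a standalone sketch, independent of LangChain's code:

```python
import string

# The template from the example above: the nested field hides in the
# format specifier, not in the top-level field name.
template = "{name:{name.__class__.__name__}}"

parsed = list(string.Formatter().parse(template))
field_names = [f for _, f, _, _ in parsed if f is not None]
format_specs = [s for _, _, s, _ in parsed if s]

print(field_names)   # ['name'] -- looks harmless to a name-based check
print(format_specs)  # ['{name.__class__.__name__}'] -- the hidden expression
```

A validator that inspects only `field_names` approves this template, yet `str.format` will still recursively resolve the expression inside the format spec at render time.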

The root cause was that validation was incomplete and inconsistent across different prompt template classes, allowing unsafe templates to be accepted and evaluated, which could lead to unauthorized access to internal data or code injection.

The vulnerability was fixed by introducing a centralized validation function that strictly parses and validates f-string templates to reject attribute access, indexing, purely numeric variable names, and nested replacement fields inside format specifiers. This validation was integrated into all relevant prompt template classes and enforced both at construction and deserialization.
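A validator along those lines can be sketched with string.Formatter. This is hypothetical code mirroring the rules described above, not LangChain's actual validate_f_string_template implementation:

```python
import string

def validate_f_string_template(template: str) -> None:
    """Reject unsafe f-string template constructs.

    Hypothetical sketch of the rules described in the fix; not the
    actual LangChain implementation.
    """
    for _, field_name, format_spec, _ in string.Formatter().parse(template):
        if field_name is None:
            continue  # literal text only, nothing to check
        if "." in field_name or "[" in field_name:
            raise ValueError(f"attribute access/indexing not allowed: {field_name!r}")
        if field_name.isdigit():
            raise ValueError(f"purely numeric field name not allowed: {field_name!r}")
        if format_spec and "{" in format_spec:
            raise ValueError(f"nested replacement field in format spec: {format_spec!r}")

validate_f_string_template("Hello {name}")  # passes
# validate_f_string_template("{obj.attr}")  # would raise ValueError
```

Running such a check both at construction and at deserialization, as the fix does, ensures a malicious template cannot sneak in through a serialized prompt.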


How can this vulnerability impact me?

This vulnerability can lead to unauthorized disclosure of internal data when untrusted template strings are accepted and richer Python objects are passed into the formatting process. Because attribute access and indexing expressions in f-string templates are evaluated during formatting, an attacker could craft malicious templates that access sensitive internal object attributes or data.

The impact severity is moderate with a CVSS score of 5.3. The attack vector is network-based with low complexity, requiring no privileges or user interaction.

If exploited, this vulnerability could expose internal object state or sensitive information into prompt outputs, model contexts, or logs, potentially leaking confidential data.

Applications that use hardcoded templates or only accept untrusted variable values without allowing template modification are not affected. The main risk is when untrusted or user-controlled templates are processed without proper validation.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability relates to unsafe f-string prompt templates in LangChain that allow attribute access, indexing, or nested replacement fields, which can lead to unauthorized data exposure. Detection involves inspecting prompt templates used in your LangChain deployment to identify any templates containing unsafe constructs such as attribute access (e.g., "."), indexing (e.g., "[" or "]"), or nested replacement fields inside format specifiers.

Since the vulnerability is in the template strings themselves, detection requires reviewing the prompt templates in your system or application code, especially those using DictPromptTemplate or ImagePromptTemplate classes prior to versions 0.3.84 and 1.2.28.

There are no specific network or system commands that detect this vulnerability automatically. Instead, detection means validating the templates themselves against the rules introduced in the fix, such as rejecting templates with attribute access or nested replacement fields.

If you have access to the LangChain source or environment, you can use or implement the `validate_f_string_template()` function described in the fix to programmatically check your templates for unsafe constructs.

In summary, detection involves:

  • Reviewing prompt templates for f-string usage with attribute access or indexing expressions.
  • Checking for nested replacement fields inside format specifiers in f-string templates.
  • Using or implementing validation functions similar to `validate_f_string_template()` to parse and validate templates.
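As a starting point for the review above, a rough pattern-based scan can flag candidate templates for manual inspection. This is a heuristic sketch, not an official tool, and it will produce false positives and negatives:

```python
import re

# Heuristic flags only -- review every match by hand.
UNSAFE_FIELD = re.compile(r"\{[^{}]*[.\[]")      # e.g. {obj.attr} or {obj[0]}
NESTED_FIELD = re.compile(r"\{[^{}]*:[^{}]*\{")  # e.g. {x:{y}}

def looks_unsafe(template: str) -> bool:
    """Flag template strings containing attribute access, indexing,
    or nested replacement fields."""
    return bool(UNSAFE_FIELD.search(template) or NESTED_FIELD.search(template))

print(looks_unsafe("Hello {name}"))                      # False
print(looks_unsafe("{obj.attr}"))                        # True
print(looks_unsafe("{name:{name.__class__.__name__}}"))  # True
```

A stricter check should use string.Formatter-based parsing rather than regexes, as in the validation approach described in the fix.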

No command-line tools or network scanning commands apply here; detection is a code-review and template-validation exercise.


What immediate steps should I take to mitigate this vulnerability?

To mitigate CVE-2026-40087, you should immediately upgrade your LangChain packages to versions 0.3.84 or 1.2.28 or later, where the vulnerability has been fixed.

The fix involves enhanced validation and sanitization of f-string prompt templates to reject unsafe constructs such as attribute access, indexing, and nested replacement fields inside format specifiers.

Additional mitigation steps include:

  • Ensure that all prompt templates, especially those using DictPromptTemplate and ImagePromptTemplate, are validated using the updated validation functions that enforce strict parsing rules.
  • Avoid accepting untrusted or user-supplied template strings that could contain malicious f-string expressions.
  • Review and sanitize any existing prompt templates to remove attribute access or indexing expressions.
  • Incorporate the updated LangChain validation logic (`validate_f_string_template()`, Pydantic validators) into your deployment to prevent unsafe templates from being constructed or deserialized.
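The version check itself can be automated. The sketch below encodes the fixed-version boundaries named in this advisory; it is a simplified comparison that assumes plain X.Y.Z version strings (use a proper version-parsing library in practice):

```python
# Simplified version gate based on the fixed releases in this advisory.
# Assumes plain "X.Y.Z" version strings.
def is_patched(version: str) -> bool:
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts[0] == 0:
        return parts >= (0, 3, 84)   # 0.x series fixed in 0.3.84
    return parts >= (1, 2, 28)       # 1.x series fixed in 1.2.28

print(is_patched("0.3.83"))  # False -- vulnerable
print(is_patched("1.2.28"))  # True  -- fixed
```

Feed it the installed langchain_core version (for example, from your dependency lockfile) to confirm the upgrade landed everywhere.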

These steps prevent attackers from exploiting template injection vulnerabilities that could lead to unauthorized disclosure of internal data or code execution.

