CVE-2026-33873
Arbitrary Code Execution in Langflow Agentic Assistant Prior to 1.9.0

Publication date: 2026-03-27

Last updated on: 2026-04-03

Assigner: GitHub, Inc.

Description
Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side. In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution. Version 1.9.0 fixes the issue.
Meta Information
Generated: 2026-05-07
AI Q&A: 2026-03-27
EPSS Evaluated: 2026-05-05
Affected Vendors & Products
Vendor: langflow
Product: langflow
Version / Range: all versions before 1.9.0 (exclusive)
CWE-94: The product constructs all or part of a code segment using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the syntax or behavior of the intended code segment.
AI-Powered Q&A
Can you explain this vulnerability to me?

CVE-2026-33873 is a vulnerability in Langflow's Agentic Assistant feature prior to version 1.9.0. The feature generates Python component code with large language models (LLMs) and then validates it. During that validation phase, the system compiles and executes the LLM-generated code server-side, including instantiating the generated classes. Because validation executes the code rather than analyzing it statically, an attacker who can access the Agentic Assistant feature and influence the model output can inject arbitrary Python code that runs on the server.

The vulnerability arises because the validation process uses Python's exec() function to run the generated code, crossing a critical trust boundary. This means malicious code embedded in the generated component can run with the privileges of the Langflow server process. The vulnerability requires authentication or access to the Agentic Assistant feature, but default settings like AUTO_LOGIN enabled can increase exposure.
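The dangerous pattern can be illustrated with a minimal sketch. The function name `create_class()` is mentioned in the advisory, but the body below is illustrative, not Langflow's actual implementation:

```python
# Sketch of the vulnerable pattern: "validating" generated code by
# executing it. Anything the model emits runs with server privileges.
def create_class(code_string: str, class_name: str):
    """Hypothetical validator: compiles, executes, and instantiates
    untrusted code instead of inspecting it statically."""
    namespace = {}
    exec(code_string, namespace)   # dynamic execution sink
    cls = namespace[class_name]
    return cls()                   # instantiation also runs __init__

# An attacker who can steer the model output can smuggle a payload
# into the "component" the assistant generates:
malicious = """
import os
class Component:
    def __init__(self):
        # arbitrary code executes here, e.g. os.system(...)
        self.marker = os.getcwd()
"""
instance = create_class(malicious, "Component")
print(type(instance).__name__)  # payload has already run by this point
```

The key point is that by the time the validator could inspect the result, the untrusted module-level code and constructor have already executed.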

Version 1.9.0 of Langflow fixes this issue by changing the validation approach to avoid dynamic execution of untrusted code.


How can this vulnerability impact me?

If exploited, this vulnerability allows an attacker with access to the Agentic Assistant feature to execute arbitrary Python code on the Langflow server. This can lead to severe impacts including:

  • Execution of arbitrary operating system commands.
  • Reading from or writing to the server's file system.
  • Disclosure of credentials or sensitive secrets stored on the server.
  • Full compromise of the Langflow process, potentially allowing further lateral movement or persistent access.

The vulnerability requires authentication or access to the Agentic Assistant feature, but insecure default configurations such as enabling AUTO_LOGIN can allow attackers to bypass authentication and exploit the vulnerability more easily.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

Detection of this vulnerability involves monitoring access to the Agentic Assistant feature endpoints such as `/assist` and `/assist/stream`, which accept user inputs that influence LLM-generated Python code execution. Since the vulnerability requires authentication or access to these features, reviewing authentication logs for unusual or unauthorized access attempts is important.

Additionally, detection can focus on identifying dynamic execution of Python code during validation, which occurs in the `create_class()` function that uses `exec()` to run code extracted from LLM responses. Monitoring for unexpected Python execution or suspicious process activity related to Langflow may help.

The advisory does not include specific detection commands, but general approaches include:

  • Review web server or application logs for requests to `/assist` and `/assist/stream` endpoints.
  • Check authentication logs for use of default or auto-login credentials, especially if `AUTO_LOGIN` is enabled.
  • Use process monitoring tools (e.g., `ps`, `top`, or Windows Task Manager) to detect unusual Python processes spawned by Langflow.
  • Inspect network traffic for suspicious API calls or unusual payloads targeting the Agentic Assistant feature.
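The log-review step above can be sketched as a small script. The log lines and their format here are assumptions for illustration; point it at your actual web server or reverse-proxy access log:

```python
import re

# Hypothetical access-log lines; in practice, read your server's log file.
sample_log = [
    '10.0.0.5 - - [27/Mar/2026:10:00:01] "POST /assist HTTP/1.1" 200',
    '10.0.0.5 - - [27/Mar/2026:10:00:02] "POST /assist/stream HTTP/1.1" 200',
    '10.0.0.9 - - [27/Mar/2026:10:00:03] "GET /health HTTP/1.1" 200',
]

# Match requests to the Agentic Assistant endpoints named in the advisory.
assist_re = re.compile(r'"(?:POST|GET)\s+/assist(?:/stream)?\s')

hits = [line for line in sample_log if assist_re.search(line)]
for line in hits:
    print(line)
```

Flagged lines can then be correlated with authentication logs to spot unauthorized or anomalous use of the feature.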

Because the vulnerability involves dynamic execution of code during validation, static code analysis or runtime monitoring tools that detect execution of unexpected Python code or commands may be helpful.


What immediate steps should I take to mitigate this vulnerability?

The primary mitigation step is to upgrade Langflow to version 1.9.0 or later, where the vulnerability is fixed by redesigning the validation process to avoid dynamic execution of untrusted code.

If upgrading immediately is not possible, the following steps should be taken:

  • Disable the `AUTO_LOGIN` feature in production environments to prevent automatic login with default superuser credentials, which can increase exposure.
  • Restrict access to the Agentic Assistant feature endpoints (`/assist` and `/assist/stream`) to trusted and authenticated users only.
  • Review and harden authentication settings, including API key validation and JWT configurations, to ensure strong authentication and authorization controls.
  • Monitor and audit usage of the Agentic Assistant feature to detect any suspicious activity.

Longer-term mitigation involves redesigning the validation mechanism to use static analysis or sandboxed execution environments instead of dynamic execution (`exec()`) of generated code.
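As a sketch of the static-analysis alternative, Python's `ast` module can check the shape of generated code without ever executing it. The specific checks below are illustrative assumptions, not Langflow's actual fix:

```python
import ast

def validate_component(code_string: str, class_name: str) -> bool:
    """Statically verify that the code defines the expected top-level
    class, without executing any of it."""
    try:
        tree = ast.parse(code_string)
    except SyntaxError:
        return False
    return any(
        isinstance(node, ast.ClassDef) and node.name == class_name
        for node in tree.body
    )

good = "class Component:\n    pass\n"
bad = "import os\nos.system('id')\n"
print(validate_component(good, "Component"))  # True: class found, nothing ran
print(validate_component(bad, "Component"))   # False: no such class defined
```

A real validator would layer further AST checks (forbidden imports, dangerous calls) or run the code in an isolated sandbox, but the essential change is that parsing replaces `exec()` on the untrusted input.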


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

CVE-2026-33873 allows an authenticated attacker to execute arbitrary Python code on the server via the Agentic Assistant feature in Langflow. This can lead to unauthorized OS command execution, file system access including reading and writing files, and disclosure of credentials or secrets.

Such unauthorized access and potential data exposure can compromise the confidentiality and integrity of sensitive data, which are core requirements under common standards and regulations like GDPR and HIPAA.

If personal data or protected health information is stored or processed by Langflow, this vulnerability could lead to violations of data protection obligations, including failure to implement adequate security measures to protect data against unauthorized access or disclosure.

Additionally, the presence of the AUTO_LOGIN feature enabled in production environments increases the risk of unauthorized access, further impacting compliance with security best practices required by these regulations.

Mitigation steps such as upgrading to version 1.9.0 and disabling AUTO_LOGIN in production are critical to maintaining compliance and reducing risk.

