CVE-2026-40156
Untrusted File Load in PraisonAI Enables Arbitrary Code Execution
Publication date: 2026-04-10
Last updated on: 2026-04-20
Assigner: GitHub, Inc.
Description
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| praison | praisonai | < 4.5.128 |
Exploitability
| CWE ID | Description |
|---|---|
| CWE-426 | The product searches for critical resources using an externally-supplied search path that can point to resources that are not under the product's direct control. |
| CWE-94 | The product constructs all or part of a code segment using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the syntax or behavior of the intended code segment. |
| CWE-829 | The product imports, requires, or includes executable functionality (such as a library) from a source that is outside of the intended control sphere. |
AI-Powered Q&A
Can you explain this vulnerability to me?
PraisonAI versions prior to 4.5.128 automatically load and execute a file named tools.py from the current working directory without explicit user consent, validation, or sandboxing.
This loading process uses importlib.util.spec_from_file_location and immediately runs the module-level code, even if tools.py is not referenced in configuration files or explicitly requested.
As a result, if an attacker can place a malicious tools.py file in the working directory where PraisonAI runs, arbitrary code execution will occur immediately upon startup.
This behavior breaks the expected security boundary between user-controlled project files and executable code, treating untrusted content as trusted and executing it automatically.
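The vulnerable pattern described above can be sketched in a few lines. This is an illustration of the mechanism, not PraisonAI's actual source; the function name `load_local_tools` is hypothetical:

```python
import importlib.util
import os

def load_local_tools(path="tools.py"):
    """Illustrative sketch of the vulnerable pattern: load and execute a
    module from the current working directory without consent, validation,
    or sandboxing. (Hypothetical helper, not PraisonAI's real code.)"""
    if os.path.exists(path):
        spec = importlib.util.spec_from_file_location("tools", path)
        module = importlib.util.module_from_spec(spec)
        # Module-level code in tools.py runs right here -- an attacker's
        # payload executes before any agent logic starts.
        spec.loader.exec_module(module)
        return module
    return None
```

Because `exec_module` runs all module-level statements, simply placing a `tools.py` in the working directory is enough to execute attacker-controlled code.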
How can this vulnerability impact me?
This vulnerability allows an attacker to execute arbitrary code on the system running PraisonAI simply by placing a malicious tools.py file in the working directory.
Such arbitrary code execution can lead to full compromise of the affected system, including unauthorized access, data theft, data modification, or disruption of services.
Because the code runs immediately upon startup, before any agent logic begins, it can bypass many security controls or detection mechanisms.
What immediate steps should I take to mitigate this vulnerability?
To mitigate this vulnerability, upgrade PraisonAI to version 4.5.128 or later, where the issue is fixed.
Additionally, ensure that no untrusted tools.py files exist in the working directories where PraisonAI is run, as the presence of such a file triggers automatic code execution.
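As a minimal sketch, the installed version can be checked programmatically before trusting a working directory. The helper names `version_at_least` and `praisonai_is_patched` are hypothetical, and the comparison only looks at the first three numeric components (pre-release suffixes would need extra handling):

```python
from importlib import metadata

FIXED = (4, 5, 128)  # first patched release

def version_at_least(version, minimum=FIXED):
    """Compare a dotted version string against the fixed release.
    Only the first three numeric components are considered."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= minimum

def praisonai_is_patched():
    """Return True if the installed 'praisonai' distribution is at or
    above the first fixed version; False if older or not installed."""
    try:
        return version_at_least(metadata.version("praisonai"))
    except metadata.PackageNotFoundError:
        return False
```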
How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?
This vulnerability allows arbitrary code execution by loading and executing untrusted code from a file named tools.py in the current working directory without user consent or validation.
Such arbitrary code execution can lead to exfiltration of environment variables and credentials, persistence on developer or CI systems, and unauthorized access to sensitive data.
These impacts can compromise the confidentiality, integrity, and availability of data, which are critical requirements under common standards and regulations like GDPR and HIPAA.
Therefore, exploitation of this vulnerability could result in non-compliance with these regulations due to potential unauthorized data access, data breaches, and failure to protect sensitive information.
How can this vulnerability be detected on my network or system? Can you suggest some commands?
This vulnerability can be detected by checking if a file named tools.py exists in the current working directory where PraisonAI is executed, as its presence triggers automatic code execution.
To detect potential exploitation or presence of malicious tools.py files, you can list files named tools.py in directories where PraisonAI runs.
- Use the command: find /path/to/praisonai/projects -type f -name tools.py
- Check for unexpected files created by malicious code, such as pwned.txt, which was used as a demonstration of exploitation.
- Monitor execution logs or run PraisonAI in a controlled environment to observe if any unexpected code executes upon startup.
Since the vulnerability involves automatic execution of tools.py without explicit user consent, auditing the working directories for untrusted or unexpected tools.py files is critical.
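The audit described above can also be run as a small Python sweep that walks a project tree and lists every `tools.py` for manual review (`find_tools_py` is a hypothetical helper, equivalent to the `find` command shown earlier):

```python
import os

def find_tools_py(root):
    """Walk a directory tree and return the path of every tools.py found,
    so each file can be reviewed before PraisonAI is run from its
    directory."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if "tools.py" in filenames:
            hits.append(os.path.join(dirpath, "tools.py"))
    return hits
```

Any path this returns that you did not create yourself should be treated as untrusted and inspected before running PraisonAI.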