CVE-2025-15379
Command Injection in MLflow Model Serving Enables Remote Code Execution
Publication date: 2026-03-30
Last updated on: 2026-04-28
Assigner: huntr.dev
Description
CVSS Scores
EPSS Scores
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| lfprojects | mlflow | From 3.8.0 (inc) to 3.8.1 (inc) |
Exploitability
| CWE ID | Description |
|---|---|
| CWE-77 | The product constructs all or part of a command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended command when it is sent to a downstream component. |
AI Powered Q&A
What immediate steps should I take to mitigate this vulnerability?
The primary mitigation is to upgrade MLflow to version 3.8.2 or later, where the vulnerability is fixed.
The fix replaces shell-based command execution with subprocess calls that do not invoke a shell and that sanitize dependency arguments.
Until you can upgrade, avoid deploying models with untrusted or unverified python_env.yaml files, especially when using `env_manager=LOCAL`.
- Upgrade MLflow to version 3.8.2 or newer.
- Validate and sanitize model artifact dependencies before deployment.
- Avoid using `env_manager=LOCAL` with untrusted models.
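As a quick triage aid, the affected-version check can be sketched as a small Python helper. The comparison below is a naive numeric one (it ignores pre-release suffixes) and is only an illustration of the range in the table above, not an official detection tool:

```python
from importlib.metadata import PackageNotFoundError, version

def is_vulnerable(installed: str) -> bool:
    """Return True if the version string falls in the affected
    range 3.8.0 (inclusive) to 3.8.1 (inclusive).
    Naive comparison: assumes a plain X.Y.Z version string."""
    parts = tuple(int(p) for p in installed.split(".")[:3])
    return (3, 8, 0) <= parts <= (3, 8, 1)

try:
    # Query the locally installed mlflow package, if any.
    print("upgrade needed:", is_vulnerable(version("mlflow")))
except PackageNotFoundError:
    print("mlflow is not installed")
```

Run this on each host that serves MLflow models; any `True` result means the host should be upgraded to 3.8.2 or newer.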
Can you explain this vulnerability to me?
This vulnerability is a command injection flaw in MLflow's model serving container initialization, specifically in the function that installs model dependencies. When deploying a model with the environment manager set to LOCAL, MLflow reads dependency specifications from the model's python_env.yaml file and directly inserts them into a shell command without sanitizing the input. This allows an attacker to craft a malicious model artifact that includes harmful commands, which get executed on the system deploying the model.
The root cause was that MLflow built a single shell command string by concatenating the dependency entries and executed it via a shell, so a specially crafted dependency string could inject arbitrary shell commands. The fix builds a list of arguments and passes it to subprocess execution without invoking a shell, and sanitizes the inputs, which prevents the injection.
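The difference between the two patterns can be sketched in a few lines. This is an illustrative example of the vulnerability class, not MLflow's actual code; the dependency strings are made up:

```python
import subprocess  # used by the (commented-out) execution examples

# Hypothetical dependency list as it might appear in a model's
# python_env.yaml -- the second entry smuggles in a shell command.
dependencies = ["numpy==1.26.4", "pandas; touch /tmp/pwned"]

# UNSAFE pattern: dependencies are concatenated into one string and
# run through a shell, so the `;` terminates the pip command and
# starts a second, attacker-controlled one.
unsafe_cmd = "pip install " + " ".join(dependencies)
# subprocess.run(unsafe_cmd, shell=True)   # would run `touch /tmp/pwned`

# SAFE pattern: pass an argument vector and never invoke a shell.
# The metacharacters stay inside one literal argument, which pip
# simply rejects as an invalid requirement specifier.
safe_cmd = ["pip", "install", *dependencies]
# subprocess.run(safe_cmd)                 # no shell, no injection
```

The key design point is that with an argument vector there is no shell parsing step at all, so there is nothing for the `;` to terminate.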
How can this vulnerability impact me?
This vulnerability can have severe impacts because it allows an attacker to execute arbitrary commands on the system deploying the MLflow model. This means an attacker could potentially take full control of the affected system, leading to data theft, system compromise, installation of malware, or disruption of services.
Given the CVSS score of 10.0, the vulnerability is critical and can be exploited remotely without any privileges or user interaction, making it highly dangerous.
How can this vulnerability be detected on my network or system? Can you suggest some commands?
Detection involves checking whether an MLflow version prior to 3.8.2 is deployed and whether model artifacts with potentially malicious python_env.yaml dependency specifications are in use.
Because the vulnerability is command injection through unsanitized shell commands during dependency installation, look for suspicious shell commands or unexpected file writes triggered by MLflow's model serving container initialization.
No vendor-provided detection commands exist, but you can monitor for unusual process executions or file creations (e.g., unexpected files in /tmp) during model deployment.
- Check MLflow version: `mlflow --version` or inspect installed package version.
- Audit model artifacts for suspicious entries in `python_env.yaml` dependency specifications.
- Monitor system logs and process executions for unexpected commands or file writes during model deployment.
How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?
The vulnerability allows arbitrary command execution on systems deploying MLflow models, which can lead to unauthorized access, data breaches, or system compromise.
Such risks bear directly on standards and regulations like GDPR and HIPAA, which require protection of sensitive data and secure system operations.
If exploited, this vulnerability could result in violations of data protection requirements, unauthorized data access, or disruption of services, all of which are significant compliance concerns.