CVE-2026-22807
BaseFortify

Publication date: 2026-01-21

Last updated on: 2026-01-30

Assigner: GitHub, Inc.

Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
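The fix described above amounts to gating dynamic-module imports on an explicit opt-in. The following is a hypothetical sketch of that pattern, not vLLM's actual code: all names and the simplified `auto_map` handling are illustrative assumptions.

```python
# Hypothetical sketch (not vLLM's actual implementation) of the safe
# pattern: refuse to import repo-supplied Python named in a config's
# `auto_map` unless the operator explicitly passed trust_remote_code=True.
# The vulnerable behavior skipped this check during model resolution.
import importlib


def resolve_model_class(config: dict, trust_remote_code: bool = False):
    """Return the class named by the config's `auto_map`, if allowed."""
    auto_map = config.get("auto_map")
    if auto_map is None:
        return None  # no dynamic code involved; nothing to gate
    if not trust_remote_code:
        # The fix: repo-controlled code must never execute by default.
        raise ValueError(
            "config declares auto_map dynamic modules; "
            "pass trust_remote_code=True to allow executing repo code"
        )
    # Simplified resolution for illustration: "module.ClassName".
    module_name, _, class_name = auto_map["AutoModel"].partition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

In real Hugging Face repos the `auto_map` value references a Python file shipped alongside the weights; the sketch only shows where the `trust_remote_code` check must sit relative to the import.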
Meta Information
Generated: 2026-05-07
AI Q&A: 2026-01-22
EPSS Evaluated: 2026-05-05
References: NVD, EUVD
Affected Vendors & Products
1 associated CPE:
Vendor: vllm | Product: vllm | Versions: from 0.10.1 (inclusive) to 0.14.0 (exclusive)
CWE
CWE-94 (Improper Control of Generation of Code, 'Code Injection'): The product constructs all or part of a code segment using externally-influenced input from an upstream component, but does not neutralize, or incorrectly neutralizes, special elements that could modify the syntax or behavior of the intended code segment.
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability in vLLM versions from 0.10.1 up to but not including 0.14.0 allows attacker-controlled Python code to execute on the server during model loading. It occurs because vLLM loads Hugging Face `auto_map` dynamic modules without checking the `trust_remote_code` setting, enabling arbitrary code execution if an attacker can influence the model repository or path. This happens at server startup, before any request handling, and does not require API access.
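To make the attack vector concrete, here is an illustrative (entirely hypothetical) example of what an attacker-planted `config.json` in a model repo could contain. The `auto_map` entry names a Python file shipped in the same repo (`modeling_evil.py` here is an invented filename); loading such a repo without a `trust_remote_code` gate imports and executes that file on the vLLM host.

```python
# Illustrative only: a malicious Hugging Face model config whose
# `auto_map` points at attacker-supplied Python inside the same repo.
# Without the trust_remote_code gate, model resolution imports the
# referenced module and runs its top-level code at server startup.
import json

malicious_config = {
    "model_type": "llama",
    "auto_map": {
        # Resolves to modeling_evil.py in the repo (hypothetical name).
        "AutoModel": "modeling_evil.EvilModel",
    },
}

config_json = json.dumps(malicious_config, indent=2)
```

Anything importable at module scope runs during the import, which is why the impact is arbitrary code execution rather than merely loading an unexpected model class.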


How can this vulnerability impact me?

An attacker who can control the model repository or path can execute arbitrary code on the vLLM host server during model loading. This can lead to full compromise of the server, including data theft, service disruption, or further attacks within the network. The vulnerability does not require API access and occurs before any request handling, making it particularly dangerous.


What immediate steps should I take to mitigate this vulnerability?

Upgrade vLLM to version 0.14.0 or later, as this version fixes the vulnerability by properly gating the loading of Hugging Face `auto_map` dynamic modules with `trust_remote_code` to prevent arbitrary code execution.
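As a quick audit aid, a deployed version can be checked against the affected range [0.10.1, 0.14.0). A minimal sketch, assuming plain x.y.z version strings (no pre-release suffixes):

```python
# Sketch: does a vLLM version string fall in the affected range
# [0.10.1, 0.14.0)? Assumes simple "major.minor.patch" strings;
# for real deployments a proper version parser is preferable.
def is_affected(version: str) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    return (0, 10, 1) <= parts < (0, 14, 0)
```

In practice the installed version could be obtained with `importlib.metadata.version("vllm")` and fed to this check; any result in the affected range means the upgrade to 0.14.0 or later is needed.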

