CVE-2026-22807
BaseFortify
Publication date: 2026-01-21
Last updated on: 2026-01-30
Assigner: GitHub, Inc.
Description
CVSS Scores
EPSS Scores
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| vllm | vllm | From 0.10.1 (inc) to 0.14.0 (exc) |
Helpful Resources
Exploitability
| CWE ID | Description |
|---|---|
| CWE-94 | The product constructs all or part of a code segment using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the syntax or behavior of the intended code segment. |
Attack-Flow Graph
AI Powered Q&A
Can you explain this vulnerability to me?
This vulnerability in vLLM versions from 0.10.1 up to but not including 0.14.0 allows attacker-controlled Python code to execute on the server during model loading. It occurs because vLLM loads Hugging Face `auto_map` dynamic modules without verifying the `trust_remote_code` setting, enabling arbitrary code execution if an attacker can influence the model repository or path. This happens at server startup before any request handling and does not require API access.
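To illustrate the class of flaw, the sketch below shows a loader that gates repository-supplied `auto_map` modules on `trust_remote_code`, which is the check the advisory says was missing. The function and structure here are hypothetical for illustration; they are not vLLM's actual internals.

```python
# Hypothetical sketch of the gating pattern described in the advisory.
# A Hugging Face model config may carry an `auto_map` entry pointing at
# Python code stored in the model repository itself. Importing that code
# executes it, so it must be gated on an explicit opt-in.

def load_model_config(config: dict, trust_remote_code: bool = False) -> str:
    auto_map = config.get("auto_map")
    if auto_map is None:
        # No repo-supplied code involved: use the built-in loader path.
        return "builtin loader"
    if not trust_remote_code:
        # The fix: refuse to execute repository-provided Python unless
        # the operator explicitly allowed it.
        raise RuntimeError(
            "auto_map requires trust_remote_code=True; refusing to load "
            "remote code during model loading"
        )
    # In a real loader, this is where the dynamic import would happen.
    return f"would dynamically import {auto_map}"
```

The vulnerable versions behaved as if the `trust_remote_code` check were absent, so the dynamic import ran unconditionally at server startup.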
How can this vulnerability impact me?
An attacker who can control the model repository or path can execute arbitrary code on the vLLM host server during model loading. This can lead to full compromise of the server, including data theft, service disruption, or further attacks within the network. The vulnerability does not require API access and occurs before any request handling, making it particularly dangerous.
What immediate steps should I take to mitigate this vulnerability?
Upgrade vLLM to version 0.14.0 or later, as this version fixes the vulnerability by properly gating the loading of Hugging Face `auto_map` dynamic modules with `trust_remote_code` to prevent arbitrary code execution.
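A quick, stdlib-only sketch for checking whether an installed version falls in the affected range (>= 0.10.1, < 0.14.0). The tuple-based parser below is a simplification that assumes plain `X.Y.Z` version strings; for pre-releases or local version tags, a full PEP 440 parser (e.g. `packaging.version`) would be the right tool.

```python
# Check whether a vLLM version string falls inside the affected range
# [0.10.1, 0.14.0). Assumes simple dotted numeric versions.

def parse(version: str) -> tuple:
    """Convert 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    return parse("0.10.1") <= parse(version) < parse("0.14.0")
```

For example, `is_affected("0.13.2")` is `True`, while `is_affected("0.14.0")` is `False`, matching the fixed release named in the advisory.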