CVE-2025-62372
BaseFortify

Publication date: 2025-11-21

Last updated on: 2025-12-04

Assigner: GitHub, Inc.

Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
Meta Information
Published: 2025-11-21
Last Modified: 2025-12-04
Generated: 2026-05-07
AI Q&A: 2025-11-21
EPSS Evaluated: 2026-05-05
Source: NVD
Affected Vendors & Products
Showing 3 associated CPEs
Vendor Product Version / Range
vllm vllm From 0.5.5 (inc) to 0.11.1 (exc)
vllm vllm 0.11.1
vllm vllm 0.11.1
Exploitability

CWE-129: The product uses untrusted input when calculating or using an array index, but the product does not validate or incorrectly validates the index to ensure the index references a valid position within the array.
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability in vLLM versions 0.5.5 to before 0.11.1 allows an attacker to crash the vLLM engine serving multimodal models by passing multimodal embedding inputs that have the correct number of dimensions (ndim) but an incorrect shape, such as a wrong hidden dimension. This can happen regardless of whether the model is intended to support such inputs. The issue was fixed in version 0.11.1.
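To make the failure mode concrete, the sketch below (names and sizes are assumptions, not vLLM's actual API) shows why a dimension-count check alone cannot distinguish a valid embedding from a malformed one, and how a full shape check can:

```python
import numpy as np

# Hypothetical illustration: suppose the served model expects a 2-D image
# embedding of shape (num_patches, hidden_size). An input with the correct
# ndim but the wrong hidden dimension passed the pre-0.11.1 validation and
# crashed the engine later, deep inside model code.
HIDDEN_SIZE = 4096  # assumed hidden size of the served model

valid = np.zeros((576, HIDDEN_SIZE), dtype=np.float32)
malformed = np.zeros((576, 1234), dtype=np.float32)  # wrong hidden dim

# An ndim-only check cannot tell the two apart:
assert valid.ndim == malformed.ndim == 2

def embedding_shape_ok(emb: np.ndarray, hidden_size: int) -> bool:
    """Full shape check: correct rank AND correct hidden dimension."""
    return emb.ndim == 2 and emb.shape[-1] == hidden_size

print(embedding_shape_ok(valid, HIDDEN_SIZE))      # True
print(embedding_shape_ok(malformed, HIDDEN_SIZE))  # False
```

The fix in 0.11.1 amounts to rejecting such inputs at request-validation time instead of letting them reach model code.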


How can this vulnerability impact me?

The vulnerability can cause the vLLM engine to crash when processing certain malformed multimodal embedding inputs. This can lead to denial of service, disrupting the availability of the inference and serving engine for large language models, potentially impacting applications relying on vLLM for multimodal model serving.


What immediate steps should I take to mitigate this vulnerability?

Upgrade the vLLM engine to version 0.11.1 or later, as this version contains the patch that fixes the vulnerability allowing crashes from malformed multimodal embedding inputs.
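If you need to audit deployments, a minimal check of whether an installed version falls in the affected range (>= 0.5.5, < 0.11.1) can be sketched as follows; this simplistic parser assumes plain x.y.z version strings and is not part of any official tooling:

```python
def is_vulnerable(version: str) -> bool:
    """Return True if this vLLM version is in the affected range
    [0.5.5, 0.11.1). Assumes a plain x.y.z version string."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return (0, 5, 5) <= parts < (0, 11, 1)

print(is_vulnerable("0.10.0"))  # True: inside the affected range
print(is_vulnerable("0.11.1"))  # False: patched release
print(is_vulnerable("0.5.4"))   # False: predates the bug
```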

