CVE-2025-62426

Publication date: 2025-11-21

Last updated on: 2025-12-04

Assigner: GitHub, Inc.

Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, the /v1/chat/completions and /tokenize endpoints allow a chat_template_kwargs request parameter that is used in the code before it is properly validated against the chat template. With the right chat_template_kwargs parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests. This issue has been patched in version 0.11.1.
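
To make the report concrete, here is a minimal sketch of the request shape involved. The endpoint path and the chat_template_kwargs parameter name come from the advisory; the server URL, model name, and kwargs values are hypothetical placeholders, and no actual denial-of-service payload is shown, since the values that trigger the hang depend on the model's chat template.

# Illustrative request shape only (not an exploit): the endpoint and the
# chat_template_kwargs parameter are from the advisory; the URL, model
# name, and kwargs values below are placeholders.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local vLLM server
    json={
        "model": "example-model",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
        # On affected versions (>= 0.5.5, < 0.11.1) these kwargs reach the
        # chat-template rendering step before they are validated.
        "chat_template_kwargs": {"example_key": "example_value"},
    },
    timeout=30,
)
print(resp.status_code)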
Meta Information
Published: 2025-11-21
Last Modified: 2025-12-04
Generated: 2026-05-06
AI Q&A: 2025-11-21
EPSS Evaluated: 2026-05-05
Source: NVD
Affected Vendors & Products
Showing 3 associated CPEs

Vendor   Product   Version / Range
vllm     vllm      From 0.5.5 (inclusive) to 0.11.1 (exclusive)
vllm     vllm      0.11.1
vllm     vllm      0.11.1
Exploitability
CWE ID: CWE-770 (Allocation of Resources Without Limits or Throttling)
Description: The product allocates a reusable resource or group of resources on behalf of an actor without imposing any intended restrictions on the size or number of resources that can be allocated.
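
As a generic illustration of CWE-770 (not vLLM's actual code), the sketch below shows a handler that performs work proportional to an attacker-controlled parameter without an upper bound, alongside the capped variant the weakness class calls for.

# Generic CWE-770 sketch, unrelated to vLLM's real implementation: the
# amount of work is driven by an unvalidated, caller-supplied parameter.
def render(template: str, kwargs: dict) -> str:
    # Unbounded: a huge "repeat" value keeps the worker busy for a long
    # time, delaying every other request it would otherwise serve.
    return template * int(kwargs.get("repeat", 1))

# Mitigated: cap attacker-controlled parameters before using them.
MAX_REPEAT = 10

def render_capped(template: str, kwargs: dict) -> str:
    repeat = min(int(kwargs.get("repeat", 1)), MAX_REPEAT)
    return template * repeat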
AI Powered Q&A
Can you explain this vulnerability to me?

In vLLM versions 0.5.5 up to, but not including, 0.11.1, the /v1/chat/completions and /tokenize endpoints accept a chat_template_kwargs request parameter and use it before validating it against the chat template. An attacker can craft chat_template_kwargs values that block the API server's request processing for long periods, delaying all other requests. The issue was fixed in version 0.11.1.


How can this vulnerability impact me?

The vulnerability can cause a denial of service: blocking the API server's processing for extended periods delays or prevents other legitimate requests from being handled, disrupting the availability and reliability of services that rely on the vLLM API.


What immediate steps should I take to mitigate this vulnerability?

Upgrade vLLM to version 0.11.1 or later, where the vulnerability has been patched.
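
As a quick way to verify exposure, assuming vLLM is importable in the environment in question, a sketch like the following compares the installed version against the affected range from the advisory:

# Version check sketch; the affected range (>= 0.5.5, < 0.11.1) is taken
# from the advisory. Requires the "packaging" library.
from packaging.version import Version
import vllm

installed = Version(vllm.__version__)
if Version("0.5.5") <= installed < Version("0.11.1"):
    print(f"vLLM {installed} is in the affected range; upgrade to 0.11.1 or later.")
else:
    print(f"vLLM {installed} is not in the affected range.")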

