CVE-2026-27893
Remote Code Execution via Hardcoded Trust in vLLM Models

Publication date: 2026-03-27

Last updated on: 2026-03-30

Assigner: GitHub, Inc.

Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
Affected Vendors & Products
Vendor Product Version / Range
vllm vllm From 0.10.1 (inc) to 0.18.0 (exc)
CWE
CWE ID Description
CWE-693 The product does not use or incorrectly uses a protection mechanism that provides sufficient defense against directed attacks against the product.
AI Powered Q&A
Can you explain this vulnerability to me?

CVE-2026-27893 is a security vulnerability in the vLLM inference and serving engine for large language models. In versions from 0.10.1 up to but not including 0.18.0, two model implementation files hardcoded the parameter `trust_remote_code=True` when loading sub-components. This hardcoding bypasses the user's explicit security setting to disable remote code trust (`--trust-remote-code=False`). As a result, malicious model repositories can execute arbitrary remote code during model loading, even if the user has opted out of trusting remote code.

The vulnerability arises because the code does not respect the user's security preference and blindly trusts remote code execution in certain model components, specifically in the NemotronVL and KimiK25 models. This issue was patched in version 0.18.0 by changing the code to dynamically use the user's configured `trust_remote_code` value instead of hardcoding it to true.
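The flawed and patched patterns can be sketched as follows. The class, function, and attribute names below are illustrative stand-ins, not vLLM's actual implementation; only the `trust_remote_code` parameter and the `self.model_config.trust_remote_code` fix come from the advisory:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    """Stand-in for the engine's model configuration."""
    trust_remote_code: bool  # mirrors the --trust-remote-code flag

def load_subcomponent_vulnerable(config: ModelConfig) -> dict:
    # Vulnerable pattern: the flag is hardcoded to True, silently
    # discarding the user's explicit opt-out.
    return {"trust_remote_code": True}

def load_subcomponent_patched(config: ModelConfig) -> dict:
    # Patched pattern: the user's configured value is propagated
    # to the sub-component loader instead of a literal True.
    return {"trust_remote_code": config.trust_remote_code}

cfg = ModelConfig(trust_remote_code=False)  # user opted out
print(load_subcomponent_vulnerable(cfg))  # {'trust_remote_code': True}
print(load_subcomponent_patched(cfg))     # {'trust_remote_code': False}
```

The point of the fix is simply that the value the user set on the command line must flow through to every sub-component load, rather than being overridden at one call site.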


How can this vulnerability impact me?

This vulnerability can have severe impacts including remote code execution (RCE) by attackers through malicious model repositories. Because the system blindly trusts remote code in certain components, attackers can execute arbitrary Python code during model loading.

  • No privileges are required to exploit this vulnerability.
  • The attack complexity is low, but user interaction is required.
  • The attack vector is network-based, meaning an attacker can exploit it remotely.
  • The vulnerability can lead to high confidentiality, integrity, and availability impacts, potentially compromising sensitive data, altering system behavior, or causing denial of service.

How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability arises from the hardcoded use of `trust_remote_code=True` in two specific model implementation files within the vLLM project, bypassing the user's security setting. To detect if your system is vulnerable, you can check the version of the vLLM package installed and inspect the relevant files for the hardcoded parameter.

  • Check the installed vLLM version: `pip show vllm` or `vllm --version`.
  • If the version is >=0.10.1 and <0.18.0, the system is potentially vulnerable.
  • Search for the hardcoded `trust_remote_code=True` in the model files, for example:

    ```bash
    grep -r "trust_remote_code=True" $(python -c "import vllm; print(vllm.__path__[0])")/model_executor/models/
    ```

This command searches the model implementation directory for occurrences of the hardcoded parameter. Presence of such lines indicates vulnerability.
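As a complement to the file search, a minimal Python check of a version string against the vulnerable range might look like the sketch below. It assumes plain dotted version strings such as `0.17.2`; pre-release suffixes are not handled, and a production audit should use a proper version-parsing library instead:

```python
# Check a vLLM version string against the vulnerable range
# (>= 0.10.1, < 0.18.0) using tuple comparison of dotted parts.
def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str) -> bool:
    v = parse_version(installed)
    return parse_version("0.10.1") <= v < parse_version("0.18.0")

for candidate in ("0.9.2", "0.10.1", "0.17.5", "0.18.0"):
    status = "VULNERABLE" if is_vulnerable(candidate) else "ok"
    print(f"{candidate}: {status}")
```

Feed it the version reported by `pip show vllm` to decide whether an upgrade is required.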


What immediate steps should I take to mitigate this vulnerability?

The primary mitigation is to upgrade the vLLM package to version 0.18.0 or later, where the vulnerability has been patched.

If upgrading immediately is not possible, you can manually patch the affected files by replacing the hardcoded `trust_remote_code=True` with a dynamic check that respects the user's configuration, specifically using `self.model_config.trust_remote_code`.

  • Upgrade vLLM to version 0.18.0 or later.
  • Verify that the model loading code respects the `trust_remote_code` setting from the model configuration.
  • Avoid loading models from untrusted or unknown remote repositories until the patch is applied.

These steps ensure that remote code execution via malicious model repositories is prevented by enforcing the user's explicit security preferences.


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

The vulnerability CVE-2026-27893 allows remote code execution by bypassing the user's explicit security setting to not trust remote code. This leads to high risks to confidentiality, integrity, and availability of data processed by the affected system.

Such a security flaw can impact compliance with common standards and regulations like GDPR and HIPAA, which require protection of sensitive data and prevention of unauthorized access or execution of malicious code. The ability for an attacker to execute arbitrary code remotely could lead to data breaches or unauthorized data manipulation, violating these regulatory requirements.

Therefore, until patched, this vulnerability poses a significant compliance risk by undermining security controls designed to protect sensitive information.

