CVE-2026-6859
Arbitrary Code Execution in InstructLab via Hardcoded Trust Setting

Publication date: 2026-04-22

Last updated on: 2026-05-06

Assigner: Red Hat, Inc.

Description
A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace. This allows a remote attacker to achieve arbitrary Python code execution by convincing a user to run `ilab train/download/generate` with a specially crafted malicious model from the HuggingFace Hub. This vulnerability can lead to complete system compromise.
Meta Information
Generated: 2026-05-07
AI Q&A: 2026-04-22
EPSS Evaluated: 2026-05-05
Affected Vendors & Products
Vendor    Product                Version / Range
redhat    enterprise_linux_ai    3.0
redhat    instructlab            *
CWE
CWE-829: The product imports, requires, or includes executable functionality (such as a library) from a source that is outside of the intended control sphere.
AI-Powered Q&A
Can you explain this vulnerability to me?

CVE-2026-6859 is a high-severity vulnerability in InstructLab caused by the hardcoded setting `trust_remote_code=True` within the `linux_train.py` script for all HuggingFace `from_pretrained()` calls.

This configuration allows arbitrary Python code execution when loading models from potentially malicious repositories on the HuggingFace Hub.

An attacker requires only a free HuggingFace account to exploit this vulnerability, by tricking a victim into running an InstructLab command such as train, download, or generate with a malicious model name.
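The mechanics can be illustrated with a minimal, self-contained simulation. This is not the transformers API: the loader, the payload string, and the flag handling below are hypothetical stand-ins showing why a load-time `trust_remote_code=True` hands code execution to whoever published the model repository.

```python
import os

# Hypothetical payload, standing in for a custom modeling_*.py file that a
# malicious model repository on the Hub can ship alongside its weights.
MALICIOUS_REPO_CODE = """
import os
os.environ["PWNED"] = "1"   # stand-in for arbitrary attacker-chosen code
"""


def load_model(repo_code: str, trust_remote_code: bool = False):
    """Toy loader: executes repo-supplied Python only when trusted.

    Mirrors the relevant behavior of from_pretrained-style loaders, where
    trust_remote_code=True means "run whatever code the repo provides".
    """
    if trust_remote_code:
        exec(repo_code)  # attacker-controlled code runs at load time
        return "model-with-custom-code"
    raise ValueError(
        "repository requires custom code; refusing without trust_remote_code=True"
    )


# Hardcoding trust_remote_code=True (as linux_train.py did) runs the payload
# the moment the model is loaded -- no further user action is needed.
load_model(MALICIOUS_REPO_CODE, trust_remote_code=True)
```

With the flag left at `False`, the same toy loader refuses to load the repository at all, which is the behavior a cautious caller would want by default.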


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

This vulnerability allows arbitrary Python code execution leading to complete system compromise, which can result in unauthorized access to sensitive data.

Such unauthorized access and potential data breaches can negatively impact compliance with common standards and regulations like GDPR and HIPAA, which require protection of personal and sensitive information.


How can this vulnerability impact me?

This vulnerability can lead to complete system compromise.

By exploiting the arbitrary Python code execution, a remote attacker can execute malicious code on the victim's Linux system running InstructLab.


What immediate steps should I take to mitigate this vulnerability?

The vulnerability arises from the hardcoded setting `trust_remote_code=True` in the `linux_train.py` script when loading models from HuggingFace. To mitigate this vulnerability, avoid running InstructLab commands such as train, download, or generate with models from untrusted or unknown HuggingFace repositories.

Additionally, review and modify the `linux_train.py` script to remove or disable the `trust_remote_code=True` setting when calling `from_pretrained()`, preventing arbitrary code execution.

Ensure users are aware not to run commands with models from potentially malicious sources and consider restricting network access or usage policies to trusted HuggingFace models only.
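The script-level fix described above can be sketched as a small defensive wrapper. The helper name and stub below are illustrative, not InstructLab code; the point is that `trust_remote_code` is forced off centrally, so no call site can re-enable it by accident.

```python
def safe_from_pretrained(loader, model_name, **kwargs):
    """Call a from_pretrained-style loader with remote code disabled.

    Overrides any caller-supplied trust_remote_code=True, so models that
    require executing repository-provided Python will fail to load instead
    of silently running attacker-controlled code.
    """
    kwargs["trust_remote_code"] = False
    return loader(model_name, **kwargs)


# With the real library, usage would look like (assuming transformers
# is installed and the model is trusted):
#   model = safe_from_pretrained(AutoModelForCausalLM.from_pretrained,
#                                "some-org/some-model")
```

Centralizing the flag this way also makes the policy auditable: a single grep for `trust_remote_code` in the codebase should find only this wrapper.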

