CVE-2026-6859
Arbitrary Code Execution in InstructLab via Hardcoded Trust Setting
Publication date: 2026-04-22
Last updated on: 2026-05-06
Assigner: Red Hat, Inc.
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| redhat | enterprise_linux_ai | 3.0 |
| redhat | instructlab | * |
Exploitability
| CWE ID | Description |
|---|---|
| CWE-829 | The product imports, requires, or includes executable functionality (such as a library) from a source that is outside of the intended control sphere. |
AI Powered Q&A
Can you explain this vulnerability to me?
CVE-2026-6859 is a high-severity vulnerability in InstructLab caused by the hardcoded setting `trust_remote_code=True` within the `linux_train.py` script for all HuggingFace `from_pretrained()` calls.
This configuration allows arbitrary Python code execution when loading models from potentially malicious repositories on the HuggingFace Hub.
An attacker needs only a free HuggingFace account: hosting a malicious model repository and tricking a victim into running an InstructLab command such as train, download, or generate with that model's name is sufficient to exploit the vulnerability.
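The dangerous pattern can be illustrated with a small self-contained sketch. The loader and repository names below are hypothetical stand-ins, not the actual `transformers` or `linux_train.py` code; the point is to show why an unconditional `trust_remote_code=True` turns model loading into code execution:

```python
# Hypothetical stand-in for HuggingFace's from_pretrained() mechanism.
# A Hub repository may ship custom Python (e.g. a modeling_*.py file);
# with trust_remote_code=True the loader executes it during loading.

FAKE_HUB = {
    # A malicious repo whose "custom model code" runs an attacker payload.
    "attacker/evil-model": "globals().setdefault('pwned', []).append('payload ran')",
    # A benign repo that ships no custom code at all.
    "trusted-org/good-model": None,
}

def from_pretrained(repo_id, trust_remote_code=False):
    """Simplified loader mirroring the trust_remote_code control flow."""
    custom_code = FAKE_HUB[repo_id]
    if custom_code is not None:
        if not trust_remote_code:
            raise ValueError(
                f"{repo_id} ships custom code; refusing without trust_remote_code=True"
            )
        exec(custom_code)  # arbitrary Python runs here
    return f"model:{repo_id}"

# With the library's safe default, loading the malicious repo fails loudly.
try:
    from_pretrained("attacker/evil-model")
except ValueError as err:
    print("blocked:", err)

# The CVE: linux_train.py hardcoded trust_remote_code=True, so the
# attacker's payload executes silently during an ordinary train/download.
from_pretrained("attacker/evil-model", trust_remote_code=True)
print("pwned" in globals())  # the payload has run
```

The real `transformers` library prompts or refuses by default when a repository contains custom code; hardcoding the flag to `True` removes that safety gate for every model the user names.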
How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?
This vulnerability allows arbitrary Python code execution leading to complete system compromise, which can result in unauthorized access to sensitive data.
Such unauthorized access and potential data breaches can negatively impact compliance with common standards and regulations like GDPR and HIPAA, which require protection of personal and sensitive information.
How can this vulnerability impact me?
This vulnerability can lead to complete system compromise.
By exploiting the arbitrary Python code execution, a remote attacker can execute malicious code on the victim's Linux system running InstructLab.
What immediate steps should I take to mitigate this vulnerability?
The vulnerability arises from the hardcoded `trust_remote_code=True` setting in the `linux_train.py` script when loading models from HuggingFace. To mitigate it, avoid running InstructLab commands such as train, download, or generate with models from untrusted or unknown HuggingFace repositories.
Additionally, review and modify the `linux_train.py` script to remove the `trust_remote_code=True` argument from its `from_pretrained()` calls, so that the library's safe default prevents arbitrary code execution.
Ensure users know not to run commands against models from potentially malicious sources, and consider restricting network access or enforcing usage policies that permit only trusted HuggingFace models.
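One way to operationalize a "trusted models only" policy is a small allow-list guard in front of every model load. This is an illustrative sketch, not part of InstructLab: the `TRUSTED_ORGS` set, the `check_repo_allowed` helper, and the example repo IDs are all inventions for this example and must be adapted to your own policy:

```python
# Illustrative allow-list guard for HuggingFace repository IDs
# (hypothetical helper; populate TRUSTED_ORGS with your own policy).

TRUSTED_ORGS = {"instructlab", "ibm-granite"}  # example organizations only

def check_repo_allowed(repo_id: str) -> str:
    """Reject model repos outside the allow-listed organizations."""
    org, sep, name = repo_id.partition("/")
    if not sep or not name:
        raise ValueError(f"malformed repo id: {repo_id!r}")
    if org not in TRUSTED_ORGS:
        raise ValueError(f"repo {repo_id!r} is not in the trusted allow-list")
    return repo_id

# A patched linux_train.py could then load models via the guard while
# relying on the library's safe default (no trust_remote_code=True), e.g.:
#   model = AutoModelForCausalLM.from_pretrained(check_repo_allowed(name))

print(check_repo_allowed("instructlab/example-model"))
```

Combined with removing the hardcoded flag, this turns a silent code-execution path into an explicit, auditable policy decision.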