CVE-2026-1839
Status: Received - Intake
Arbitrary Code Execution in HuggingFace Trainer via Unsafe Checkpoint Load

Publication date: 2026-04-07

Last updated on: 2026-04-28

Assigner: huntr.dev

Description
A vulnerability in the HuggingFace Transformers library's `Trainer` class allows arbitrary code execution. The `_load_rng_state()` method in `src/transformers/trainer.py` (around line 3059) calls `torch.load()` without the `weights_only=True` parameter. The issue affects all library versions that support `torch>=2.2` when run with PyTorch versions below 2.6, since the `safe_globals()` context manager provides no protection on those PyTorch versions. An attacker who can supply a malicious checkpoint file, such as `rng_state.pth`, can execute arbitrary code when it is loaded. The issue is fixed in version v5.0.0rc3.
Meta Information
Published: 2026-04-07
Last Modified: 2026-04-28
Generated: 2026-05-07
AI Q&A: 2026-04-07
EPSS Evaluated: 2026-05-05
Affected Vendors & Products
Showing 4 associated CPEs
Vendor       Product        Version / Range
huggingface  transformers   5.0.0
huggingface  transformers   up to 5.0.0 (excluding)
huggingface  transformers   5.0.0
huggingface  transformers   5.0.0
CWE ID: CWE-502 (Deserialization of Untrusted Data)
Description: The product deserializes untrusted data without sufficiently ensuring that the resulting data will be valid.
AI-Powered Q&A
Can you explain this vulnerability to me?

This vulnerability exists in the HuggingFace Transformers library's Trainer class, specifically in the _load_rng_state() method. The method calls torch.load() without the weights_only=True parameter, the safeguard that restricts checkpoint loading to plain tensor data.

On PyTorch versions below 2.6 (specifically 2.2 through 2.5), the safe_globals() context manager provides no protection, leaving the torch.load() call vulnerable to arbitrary code execution.

An attacker can exploit this by supplying a malicious checkpoint file (such as rng_state.pth) that executes arbitrary code when loaded. The issue is fixed in version v5.0.0rc3 by adding weights_only=True to the torch.load() call, which restricts loading to tensor weights and prevents code execution.
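To see why loading a checkpoint without `weights_only=True` is dangerous, recall that `torch.load()` falls back to Python's pickle module, and unpickling lets the file choose which callable runs at load time via `__reduce__`. The sketch below demonstrates that mechanism with stdlib pickle only; the class name `EvilPayload` and the harmless callable (`len`) are illustrative stand-ins for what a real malicious `rng_state.pth` would carry (e.g. `os.system`).

```python
import pickle

class EvilPayload:
    """A pickle payload controls what runs at load time via __reduce__."""

    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call len('pwned')".
        # A real attack would substitute os.system or similar here.
        return (len, ("pwned",))

payload = pickle.dumps(EvilPayload())

# Unpickling does NOT recreate an EvilPayload; it invokes the chosen
# callable and returns its result. This is the CWE-502 code path that
# an unrestricted torch.load() walks for every checkpoint it opens.
result = pickle.loads(payload)
print(result)  # 5, i.e. len("pwned") executed during deserialization
```

The takeaway: the file format itself carries executable instructions, so no amount of validating the loaded tensors afterward helps; the code has already run by the time `torch.load()` returns.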


How can this vulnerability impact me?

If you use the affected versions of the HuggingFace Transformers library with PyTorch versions 2.2 through 2.5, an attacker could exploit this vulnerability by providing a malicious checkpoint file.

This could lead to arbitrary code execution on your system, potentially allowing the attacker to run harmful code, compromise your environment, steal data, or disrupt operations.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

This vulnerability can be detected by identifying if your system uses the HuggingFace Transformers library versions prior to v5.0.0rc3 in combination with PyTorch versions 2.2 through 2.5.

Specifically, detection involves checking whether the vulnerable `_load_rng_state()` method is present and if it calls `torch.load()` without the `weights_only=True` parameter.

You can also monitor for the presence or loading of suspicious checkpoint files such as `rng_state.pth` that could be malicious.

  • Check the installed version of transformers: `pip show transformers`
  • Check the installed version of PyTorch: `pip show torch`
  • Search your codebase for usage of `torch.load()` in `src/transformers/trainer.py` around line 3059 to verify if `weights_only=True` is missing.
  • Scan for suspicious checkpoint files (e.g., `rng_state.pth`) in your environment that might be loaded by the Trainer.
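The version checks above can be scripted. The helper below is a minimal sketch that classifies an installation from its reported version strings; it deliberately compares only major.minor (so `2.5.1+cu121` parses fine) and treats any Transformers release below 5.0 as potentially affected, which is a coarse heuristic rather than an exact match against the advisory's range.

```python
import re

def parse_major_minor(version):
    """Extract (major, minor) ints from a string like '2.5.1+cu121'."""
    m = re.match(r"(\d+)\.(\d+)", version)
    if not m:
        raise ValueError(f"unrecognized version string: {version!r}")
    return int(m.group(1)), int(m.group(2))

def torch_is_exposed(torch_version):
    """True when PyTorch falls in the 2.2 <= v < 2.6 window from the advisory."""
    return (2, 2) <= parse_major_minor(torch_version) < (2, 6)

def likely_vulnerable(transformers_version, torch_version):
    """Coarse heuristic: pre-5.0 Transformers combined with exposed PyTorch."""
    return (parse_major_minor(transformers_version) < (5, 0)
            and torch_is_exposed(torch_version))

print(likely_vulnerable("4.49.0", "2.5.1"))  # True
print(likely_vulnerable("4.49.0", "2.6.0"))  # False: PyTorch already safe
print(likely_vulnerable("5.0.0", "2.4.0"))   # False: Transformers patched
```

Feed it the output of `pip show transformers` and `pip show torch` (or `importlib.metadata.version(...)` in the same environment) to triage a machine quickly.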

What immediate steps should I take to mitigate this vulnerability?

To mitigate this vulnerability immediately, upgrade the HuggingFace Transformers library to version v5.0.0rc3 or later, where the issue is fixed by adding the `weights_only=True` parameter to the `torch.load()` call.

If upgrading is not immediately possible, avoid loading untrusted or malicious checkpoint files, especially those containing `rng_state.pth`.

Ensure that your PyTorch version is updated to 2.6 or later, as the vulnerability affects only PyTorch versions below 2.6.

Review your code and dependencies to confirm that no unsafe deserialization of checkpoint files occurs without the `weights_only=True` safeguard.
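As part of that review, a quick source scan can surface `torch.load()` call sites that lack the safeguard. The sketch below is a deliberately simple single-line heuristic: it misses multi-line calls, aliased imports, and keyword arguments passed via `**kwargs`, so treat its output as a starting point for manual review, not a verdict.

```python
def find_unsafe_torch_loads(source_text):
    """Return line numbers of torch.load() calls without weights_only.

    Single-line heuristic only: multi-line calls, aliased imports, and
    kwargs passed via ** are not detected.
    """
    hits = []
    for lineno, line in enumerate(source_text.splitlines(), start=1):
        if "torch.load(" in line and "weights_only" not in line:
            hits.append(lineno)
    return hits

sample = """\
import torch
state = torch.load(path)                       # flagged
rng = torch.load(path, weights_only=True)      # not flagged
"""
print(find_unsafe_torch_loads(sample))  # [2]
```

Run it over your own training scripts as well as vendored copies of the library; the vulnerable call in this advisory lives in `src/transformers/trainer.py`.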


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

The vulnerability in the HuggingFace Transformers library allows arbitrary code execution through unsafe deserialization of malicious checkpoint files. This security flaw could potentially lead to unauthorized access or manipulation of data processed by the affected software.

Such unauthorized code execution risks compromising the confidentiality, integrity, and availability of data, which are core principles in compliance frameworks like GDPR and HIPAA.

Therefore, if exploited, this vulnerability could result in violations of these regulations by exposing sensitive personal or health information or by undermining system security controls required by these standards.

