CVE-2026-3071
Deserialization Vulnerability in Flair LanguageModel Enables Code Execution
Publication date: 2026-02-26
Last updated on: 2026-02-27
Assigner: HiddenLayer
Description
CVSS Scores
EPSS Scores
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| flair | flair | From 0.4.1 (inclusive) |
Helpful Resources
Exploitability
| CWE ID | Description |
|---|---|
| CWE-502 | The product deserializes untrusted data without sufficiently ensuring that the resulting data will be valid. |
Attack-Flow Graph
AI Powered Q&A
Can you explain this vulnerability to me?
CVE-2026-3071 is a critical vulnerability in the Flair NLP library, specifically in the `LanguageModel` class's method for loading language models. The vulnerability arises because the method uses PyTorch's `torch.load()` function with `weights_only` set to `False`, which relies on Python's pickle module for deserialization. This means that if a maliciously crafted model file is loaded, it can execute arbitrary code during the deserialization process.

An attacker who can control the model file path can exploit this vulnerability to achieve remote code execution (RCE) on the system loading the model. The flaw affects Flair versions from 0.4.1 through the latest release. [1]
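To make the mechanism concrete, here is a minimal, self-contained sketch of how pickle-based deserialization turns a "model" file into code execution. It is generic CWE-502 demonstration code, not Flair's actual source, and the file name `malicious_model.pt` is purely illustrative.

```python
# Illustrative only: why torch.load(..., weights_only=False) is dangerous.
# pickle reconstructs objects by calling whatever __reduce__ returns, so a
# crafted "checkpoint" can execute attacker-chosen code at load time.
import os

import torch


class MaliciousPayload:
    def __reduce__(self):
        # Called automatically during unpickling; a real attacker could run
        # any command here instead of this harmless echo.
        return (os.system, ("echo code executed during deserialization",))


# Attacker side: serialize the payload so it looks like an ordinary checkpoint.
torch.save(MaliciousPayload(), "malicious_model.pt")

# Victim side: merely loading the file triggers the payload, because
# weights_only=False hands the whole file to Python's pickle machinery.
torch.load("malicious_model.pt", weights_only=False)
```

Loading the same file with `weights_only=True` would instead raise an error, since that mode restricts deserialization to plain tensor data rather than arbitrary Python objects.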
How can this vulnerability impact me?
This vulnerability can have severe impacts because it allows an attacker to execute arbitrary code remotely on the affected system by loading a malicious model file. This can lead to full system compromise, including unauthorized access, data theft, data corruption, or disruption of services.
Since the vulnerability has a high CVSS score of 8.4 with high impact on confidentiality, integrity, and availability, it poses a significant security risk to any environment using the vulnerable Flair versions.
How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?
I don't know
How can this vulnerability be detected on my network or system? Can you suggest some commands?
This vulnerability can be detected by determining whether your systems use the Flair NLP library in an affected version (0.4.1 through the latest release) and, in particular, whether the `LanguageModel.load_language_model` method is ever called on model files that could be attacker-controlled.
To detect potential exploitation attempts, monitor for suspicious file loads or executions related to Flair model files, especially code paths that invoke `torch.load()` with `weights_only=False`.
There are no specific commands provided in the resources, but a general approach includes the following (a version-check sketch follows this list):
- Checking the version of Flair installed: `pip show flair` or `pip list | grep flair`
- Searching for usage of `load_language_model` in your codebase to identify if untrusted model files are loaded.
- Monitoring logs or system activity for unexpected executions or file accesses related to Flair model files.
- Using file integrity monitoring to detect unexpected or suspicious model files (e.g., pickle files) being introduced.
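To automate the version check, the following is a minimal sketch. It assumes the affected range starts at 0.4.1 with no fixed upper bound published, and that the third-party `packaging` library is available; verify both assumptions against the official advisory before relying on the output.

```python
# Minimal local check: flag an installed flair version that falls in the
# affected range (>= 0.4.1; no fixed upper bound assumed here).
from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version  # assumes the 'packaging' package is installed

AFFECTED_FROM = Version("0.4.1")

try:
    installed = Version(version("flair"))
except PackageNotFoundError:
    print("flair is not installed on this Python environment.")
else:
    if installed >= AFFECTED_FROM:
        print(
            f"flair {installed} is in the affected range (>= {AFFECTED_FROM}); "
            "audit all calls to LanguageModel.load_language_model."
        )
    else:
        print(f"flair {installed} is below the affected range.")
```

A codebase search such as `grep -rn "load_language_model" .` can then point to the call sites worth auditing.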
What immediate steps should I take to mitigate this vulnerability?
Immediate mitigation steps include:
- Avoid loading untrusted or unauthenticated Flair model files, especially those that could be malicious pickle files (a defensive loading sketch follows this list).
- If possible, update Flair to a version where this vulnerability is fixed (check Flair's official repository or advisories for patches).
- Restrict file system permissions and access controls to prevent attackers from placing malicious model files in locations accessible to the application.
- Review and audit any code that calls `LanguageModel.load_language_model` to ensure it only loads trusted models.
- Consider sandboxing or isolating the environment where Flair models are loaded to limit the impact of potential code execution. [1]
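As a complement to the list above, the sketch below shows one defensive pattern: verify a model file against a known-good SHA-256 digest before handing it to any pickle-based loader. The `TRUSTED_DIGESTS` allow-list, the file name, and the Flair import path are assumptions for illustration; `load_language_model` itself is the method named in this advisory.

```python
# Defensive sketch: only deserialize model files whose SHA-256 digest matches
# a pre-approved allow-list, so unvetted pickle data never reaches the loader.
import hashlib
from pathlib import Path

# Hypothetical allow-list; replace the file names and digests with the
# known-good values for the checkpoints you actually ship.
TRUSTED_DIGESTS = {
    "news-forward.pt": "replace-with-known-good-sha256-hex-digest",
}


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_trusted_model(path: str):
    model_file = Path(path)
    expected = TRUSTED_DIGESTS.get(model_file.name)
    if expected is None or sha256_of(model_file) != expected:
        raise ValueError(f"Refusing to load unverified model file: {model_file}")
    # Only a verified file ever reaches the pickle-based loader.
    # Import path is an assumption; adjust it to match your Flair version.
    from flair.models import LanguageModel
    return LanguageModel.load_language_model(str(model_file))
```

Digest verification does not remove the underlying deserialization risk; it only narrows what can reach it, so it should be combined with updating Flair once a patched release is available.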