CVE-2026-1462
Status: Received - Intake
Deserialization RCE in Keras TFSMLayer Bypasses Safe Mode

Publication date: 2026-04-13

Last updated on: 2026-04-13

Assigner: huntr.dev

Description
A vulnerability in the `TFSMLayer` class of the `keras` package, version 3.13.0, allows attacker-controlled TensorFlow SavedModels to be loaded during deserialization of `.keras` models, even when `safe_mode=True`. This bypasses the security guarantees of `safe_mode` and enables arbitrary attacker-controlled code execution during model inference under the victim's privileges. The issue arises due to the unconditional loading of external SavedModels, serialization of attacker-controlled file paths, and the lack of validation in the `from_config()` method.
Meta Information
Published: 2026-04-13
Last Modified: 2026-04-13
Generated: 2026-05-07
AI Q&A: 2026-04-13
EPSS Evaluated: 2026-05-05
Affected Vendors & Products
Vendor: keras | Product: keras | Version / Range: 3.13.0
CWE
CWE-502 (Deserialization of Untrusted Data): The product deserializes untrusted data without sufficiently ensuring that the resulting data will be valid.
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability exists in the `TFSMLayer` class of the Keras package version 3.13.0. It allows attacker-controlled TensorFlow SavedModels to be loaded during the deserialization of `.keras` model files, even when the `safe_mode` option is enabled. This bypasses the intended security protections of `safe_mode`.

The root cause is that the deserialization process unconditionally loads external SavedModels without validating them, and the `from_config()` method does not properly enforce safe mode. This enables an attacker to embed malicious code in a SavedModel that gets executed during model inference under the victim's privileges.
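The mechanism can be sketched with a plain dictionary. This is an illustration of the serialized-config shape only, not the actual Keras source; the path value is a hypothetical placeholder. A `TFSMLayer` config carries the filepath of the external SavedModel, and on an unpatched Keras 3.13.0 `from_config()` reconstructs the layer from that path without any safe-mode check:

```python
# Illustrative sketch only -- not the actual Keras serialization code.
# A .keras archive stores each layer as a JSON config; for TFSMLayer that
# config includes the path of an external SavedModel.
layer_config = {
    "class_name": "TFSMLayer",
    "config": {
        # Attacker-controlled path serialized into the model file. On an
        # unpatched Keras 3.13.0, from_config() hands it to the SavedModel
        # loader even when the caller asked for safe_mode=True.
        "filepath": "payload_savedmodel",
        "call_endpoint": "serving_default",
    },
}
print(layer_config["config"]["filepath"])
```

Because the filepath is data inside the `.keras` file, whoever authored the file, not whoever loads it, decides what gets executed at inference time.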


How does this vulnerability affect compliance with common standards and regulations (like GDPR, HIPAA)?

This vulnerability allows arbitrary attacker-controlled code execution during model inference under the victim's privileges by bypassing security guarantees in the Keras package. Such unauthorized code execution can lead to unauthorized access, data breaches, or manipulation of sensitive data.

Consequently, this security flaw could impact compliance with common standards and regulations like GDPR and HIPAA, which mandate strict controls over data confidentiality, integrity, and security. Exploitation of this vulnerability may result in unauthorized disclosure or alteration of personal or protected health information, thereby violating these regulatory requirements.


How can this vulnerability impact me?

This vulnerability can lead to arbitrary code execution on the victim's system when a malicious `.keras` model containing a crafted `TFSMLayer` is loaded. An attacker can exploit this to run any code with the same privileges as the user running the model.

The impact includes potential full system compromise, data theft, unauthorized access, or disruption of services, depending on the privileges of the user and the environment where the model is loaded.


How can this vulnerability be detected on my network or system? Can you suggest some commands?

Detection of this vulnerability involves identifying attempts to deserialize `.keras` model files containing the `TFSMLayer` class, especially when these files originate from untrusted sources.

Since the vulnerability arises during deserialization via the `from_config()` method, monitoring or logging calls to this method or attempts to load `.keras` files with `TFSMLayer` instances can help detect exploitation attempts.

No detection commands are published for this CVE, but you can use a Python script that attempts to load `.keras` models and catches the exceptions raised when `safe_mode=True` blocks deserialization. For example, on a patched Keras, a `ValueError` about unsafe deserialization when loading a file indicates that the file contains a `TFSMLayer` (or another disallowed object) and warrants inspection.
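A lower-risk check needs no Keras install at all: the Keras v3 `.keras` format is a zip archive whose `config.json` describes every layer, so an archive can be scanned for `TFSMLayer` references before anything is deserialized. A minimal sketch, assuming that archive layout (the helper name is our own):

```python
import json
import zipfile


def references_tfsm_layer(path):
    """Return True if a .keras archive's config.json mentions TFSMLayer.

    Assumes the Keras v3 format: a zip archive containing a config.json
    with the serialized layer graph. Nothing is deserialized or executed,
    so this is safe to run on untrusted files.
    """
    with zipfile.ZipFile(path) as archive:
        with archive.open("config.json") as fp:
            config = json.load(fp)
    # A substring search over the re-serialized config is enough to flag
    # suspicious archives for manual review.
    return "TFSMLayer" in json.dumps(config)
```

A flagged file is not necessarily malicious, since `TFSMLayer` has legitimate uses, but any hit on a model from an untrusted source deserves manual review before loading.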

Additionally, monitoring network traffic for unexpected downloads or uploads of `.keras` files or TensorFlow SavedModels from untrusted sources may help detect attempts to exploit this vulnerability.


What immediate steps should I take to mitigate this vulnerability?

To mitigate this vulnerability, ensure that the Keras library is updated to a version that includes the fix for CVE-2026-1462.

The fix introduces a safe mode in the `TFSMLayer.from_config()` method that blocks deserialization of `TFSMLayer` instances when `safe_mode=True` (the default).

Do not disable safe mode unless you explicitly trust the source of the `.keras` models being loaded.

  • Apply the patch or update Keras to a version that includes the safe mode enforcement.
  • Avoid loading `.keras` model files from untrusted or unknown sources.
  • If you maintain code that deserializes `.keras` models, ensure it respects the safe mode setting and does not override it to disable safety checks.
  • Monitor and audit model loading operations to detect any attempts to bypass safe mode.
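The points above can be wrapped in a small loading policy. The sketch below takes the loader as a parameter (in practice `keras.models.load_model`, whose `safe_mode` argument defaults to `True`) so the policy can be tested without Keras installed; the function name and `trusted` flag are our own conventions, not Keras API:

```python
def load_model_with_policy(path, loader, trusted=False):
    """Load a .keras model while refusing to weaken safe_mode silently.

    `loader` is typically keras.models.load_model. safe_mode=True is
    already the Keras default, but passing it explicitly prevents an
    accidental override elsewhere in the code base.
    """
    if not trusted:
        return loader(path, safe_mode=True)
    # Only reached for models from an explicitly trusted source.
    return loader(path, safe_mode=False)
```

Centralizing the decision in one function makes it auditable: a code search for `safe_mode=False` should only ever hit this wrapper.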
