CVE-2025-8747
BaseFortify

Publication date: 2025-08-11

Last updated on: 2025-08-14

Assigner: Google Inc.

Description
A safe mode bypass vulnerability in the `Model.load_model` method in Keras versions 3.0.0 through 3.10.0 allows an attacker to achieve arbitrary code execution by convincing a user to load a specially crafted `.keras` model archive.
Meta Information
Published: 2025-08-11
Last Modified: 2025-08-14
Generated: 2026-05-07
AI Q&A: 2025-08-11
EPSS Evaluated: 2026-05-05
Affected Vendors & Products (1 associated CPE)
Vendor: keras | Product: keras | Version range: 3.0.0 through 3.10.0, inclusive
Exploitability
CWE-502: The product deserializes untrusted data without sufficiently ensuring that the resulting data will be valid.
AI Powered Q&A
Can you explain this vulnerability to me?

CVE-2025-8747 is a critical security flaw in Keras versions 3.0.0 through 3.10.0: a safe_mode bypass in the `Model.load_model` method allows attackers to execute arbitrary code. Keras deserializes Lambda layers embedded in saved models, and a crafted Lambda configuration can invoke existing Python functions during model loading. An attacker who convinces a victim to load a malicious `.keras` model file can therefore run harmful commands or download and overwrite files on the victim's system, bypassing the protections that safe_mode is intended to provide. [1]
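
The danger of deserializing callables can be sketched in a few lines. The following is a toy illustration of the CWE-502 pattern described above, not Keras's actual code: a naive deserializer resolves a dotted function path taken from untrusted config and calls it, so whoever controls the config controls which importable function runs.

```python
import importlib

# Illustrative sketch (NOT Keras's real implementation): resolve a dotted
# path from untrusted config data and return the named callable.
def unsafe_resolve(dotted_path):
    module_name, _, attr = dotted_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# A hypothetical "model config" controlled by an attacker could name any
# importable callable; here a harmless one, but nothing stops "os.system".
config = {"class_name": "Lambda", "function": "math.sqrt", "args": [16.0]}
fn = unsafe_resolve(config["function"])
result = fn(*config["args"])  # attacker-chosen function runs at load time
```

The exploit class exists precisely because the loader, not the user, decides which function gets called.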


How can this vulnerability impact me?

This vulnerability can lead to arbitrary code execution on your system when you load a specially crafted Keras model. An attacker could execute shell commands, download malicious files, overwrite critical files such as SSH authorized_keys, and gain persistent or remote access to your system. This poses a significant security risk, potentially compromising system integrity, confidentiality, and availability. [1]


How can this vulnerability be detected on my network or system? Can you suggest some commands?

Detection involves monitoring for the loading of suspicious Keras model files (`.keras`) that may contain malicious Lambda layers. Because the exploit triggers arbitrary code execution during model loading, you can scan model files for Lambda layers that invoke unusual or dangerous functions, such as `keras.utils.get_file` or `heapq.nsmallest` with suspicious parameters. Monitoring system logs for unexpected file downloads or shell commands executed by Python processes that load Keras models can also help. Practical approaches include scanning the model's JSON configuration for suspicious Lambda layer entries (e.g., using grep or jq) and watching running Python processes or network activity for unexpected behavior. Note that no exact commands are provided in the referenced resources. [1]
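
One way to automate the scan described above: a `.keras` archive is a zip file whose architecture lives in a `config.json` entry, so the config can be parsed and searched for Lambda layers before the model is ever loaded. This is a defensive sketch under that assumption; `scan_keras_archive` is a hypothetical helper, not part of Keras.

```python
import json
import zipfile

def find_lambda_layers(node, found=None):
    """Recursively collect any config dicts whose class_name is 'Lambda'."""
    found = [] if found is None else found
    if isinstance(node, dict):
        if node.get("class_name") == "Lambda":
            found.append(node)
        for value in node.values():
            find_lambda_layers(value, found)
    elif isinstance(node, list):
        for item in node:
            find_lambda_layers(item, found)
    return found

def scan_keras_archive(path):
    """Flag Lambda layers in a .keras file without loading the model."""
    # Assumes the .keras zip layout with a top-level config.json entry.
    with zipfile.ZipFile(path) as archive:
        config = json.loads(archive.read("config.json"))
    return find_lambda_layers(config)
```

A non-empty result does not prove the file is malicious, but any Lambda layer in a model from an untrusted source deserves manual review before loading.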


What immediate steps should I take to mitigate this vulnerability?

Immediate mitigation steps include:

1) Avoid loading untrusted or unaudited Keras model files, especially those containing Lambda layers.
2) Upgrade Keras to the latest patched version, which includes the fix merged on June 29, 2025; it restricts deserialization to safe Keras objects (KerasSaveable subclasses), preventing arbitrary code execution.
3) Implement strict sandboxing and execution controls around model loading to limit potential damage.
4) Disable unsafe deserialization features such as `enable_unsafe_deserialization`.
5) Perform security scanning of ML models before deployment.

These steps reduce the risk of exploitation by limiting deserialization to vetted classes and preventing execution of arbitrary functions during model loading. [1, 2]
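
The allowlist approach the patch takes can be sketched as follows. This is an illustrative simplification, not the actual Keras fix: deserialization succeeds only for class names on an explicit allowlist (standing in for the KerasSaveable check), so arbitrary or attacker-chosen classes are rejected outright.

```python
# Hypothetical allowlist standing in for "is a KerasSaveable subclass".
SAFE_CLASSES = {"Dense", "Conv2D", "InputLayer"}

def safe_deserialize(config):
    """Reject any config whose class is not explicitly vetted."""
    class_name = config.get("class_name")
    if class_name not in SAFE_CLASSES:
        raise ValueError(f"Refusing to deserialize untrusted class: {class_name}")
    # A real implementation would instantiate the layer here; the sketch
    # just returns the vetted class name.
    return class_name

safe_deserialize({"class_name": "Dense"})       # accepted
try:
    safe_deserialize({"class_name": "Lambda"})  # rejected before any code runs
except ValueError:
    pass
```

The key property is deny-by-default: anything not on the vetted list fails closed before any attacker-controlled code path is reached.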

