CVE-2025-9906
BaseFortify

Publication date: 2025-09-19

Last updated on: 2025-09-23

Assigner: Google Inc.

Description
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .keras model archive that, when loaded via Model.load_model, triggers arbitrary code execution. This is achieved by crafting a special config.json (a file within the .keras archive) that invokes keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, the attacker can use the Keras Lambda layer feature, which accepts arbitrary Python code in the form of pickled code. Both elements can appear in the same archive: the call to keras.config.enable_unsafe_deserialization() simply needs to appear first in the archive, with the Lambda layer carrying the arbitrary code second.
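Since the description above says the attack hinges on markers inside the archive's config.json, a defensive pre-load scan can be sketched with only the standard library. This is a hypothetical check, not part of Keras; the function name and the exact marker strings are assumptions for illustration, and a real scanner should parse the JSON structurally rather than substring-match.

```python
# Hypothetical pre-load check (not a Keras API): scan a .keras archive's
# config.json for markers of this attack before calling Model.load_model.
# The ".keras" format is a zip archive containing config.json, per the
# description above; the marker strings are illustrative assumptions.
import io
import json
import zipfile

SUSPICIOUS_MARKERS = (
    "enable_unsafe_deserialization",  # call that disables safe mode mid-load
    "Lambda",                         # layer type that can carry pickled code
)

def looks_malicious(archive_bytes: bytes) -> bool:
    """Return True if the archive's config.json contains a suspicious marker."""
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        if "config.json" not in zf.namelist():
            return False
        config_text = zf.read("config.json").decode("utf-8", errors="replace")
    return any(marker in config_text for marker in SUSPICIOUS_MARKERS)

# Demo: build a minimal in-memory archive mimicking the crafted config.json.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config.json", json.dumps(
        {"module": "keras.config", "class_name": "enable_unsafe_deserialization"}
    ))
print(looks_malicious(buf.getvalue()))  # True for this crafted archive
```

Such a scan can flag obviously hostile archives, but it is no substitute for upgrading: only models from trusted sources should ever be loaded.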
Meta Information
Published: 2025-09-19
Last Modified: 2025-09-23
Generated: 2026-05-07
AI Q&A: 2025-09-19
EPSS Evaluated: 2026-05-05
Source: NVD
Affected Vendors & Products
1 associated CPE:
Vendor: keras
Product: keras
Affected versions: from 3.0.0 (inclusive) to 3.11.0 (exclusive)
CWE-502: The product deserializes untrusted data without sufficiently ensuring that the resulting data will be valid.
AI Powered Q&A
How can this vulnerability impact me?

This vulnerability can lead to arbitrary code execution on the system loading the malicious Keras model. This means an attacker could run any code they want, potentially leading to system compromise, data theft, or disruption of services.


Can you explain this vulnerability to me?

This vulnerability involves the Keras Model.load_model method, which can be exploited to execute arbitrary code even when safe_mode=True. An attacker can create a specially crafted .keras model archive containing a config.json file that disables safe mode by invoking keras.config.enable_unsafe_deserialization(). After safe mode is disabled, the attacker can use the Lambda layer feature in Keras, which allows arbitrary Python code via pickled code, to execute malicious code when the model is loaded.
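Because the affected range listed above is Keras 3.0.0 (inclusive) to 3.11.0 (exclusive), one practical response is a simple version gate before loading any model. This is a minimal sketch under stated assumptions: the helper name is hypothetical, and the naive dotted-integer parsing stands in for a proper version library such as packaging.version.

```python
# Hypothetical helper (not a Keras API): decide whether a Keras version
# falls inside the affected range [3.0.0, 3.11.0) from this advisory.
# Assumes versions are plain dotted integers like "3.10.5".
def is_patched(keras_version: str) -> bool:
    """Return True if the version is outside the affected range [3.0.0, 3.11.0)."""
    major, minor = (int(p) for p in keras_version.split(".")[:2])
    if major < 3:
        return True  # Keras 2.x is outside the range this advisory lists
    return major > 3 or minor >= 11

for version in ("3.0.0", "3.10.5", "3.11.0"):
    print(version, is_patched(version))  # 3.11.0 is the first fixed release
```

Even on a patched release, models from untrusted sources should not be loaded, since deserialization formats remain a recurring attack surface.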

