CVE-2026-0897
BaseFortify

Publication date: 2026-01-15

Last updated on: 2026-01-15

Assigner: Google Inc.

Description
Allocation of Resources Without Limits or Throttling in the HDF5 weight loading component in Google Keras 3.0.0 through 3.13.0 on all platforms allows a remote attacker to cause a Denial of Service (DoS) through memory exhaustion and a crash of the Python interpreter via a crafted .keras archive containing a valid model.weights.h5 file whose dataset declares an extremely large shape.
Meta Information
Generated: 2026-05-07
AI Q&A: 2026-01-16
EPSS Evaluated: 2026-05-05
Affected Vendors & Products
Showing 1 associated CPE
Vendor Product Version / Range
keras keras From 3.0.0 (inclusive) to 3.13.0 (inclusive)
CWE ID: CWE-770
Description: The product allocates a reusable resource or group of resources on behalf of an actor without imposing any intended restrictions on the size or number of resources that can be allocated.
AI Powered Q&A
Can you explain this vulnerability to me?

This vulnerability in Google Keras (versions 3.0.0 through 3.13.0) allows a remote attacker to cause a Denial of Service (DoS) by exploiting the HDF5 weight loading component. Specifically, an attacker can craft a .keras archive containing a model.weights.h5 file with a dataset that declares an extremely large or malformed shape (known as an "HDF5 shape bomb"). When Keras attempts to load this file, it triggers excessive memory allocation, exhausting system resources and crashing the Python interpreter. The root cause is insufficient validation of dataset shapes and metadata during model loading. [1]
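To make the "shape bomb" concrete, the following sketch (using the third-party h5py library, which is not part of the advisory) builds an HDF5 file that declares a multi-gigabyte dataset while occupying only a few kilobytes on disk, because HDF5 does not allocate space for data that is never written:

```python
import os
import tempfile

import h5py  # third-party HDF5 bindings; assumed available for illustration

# Build a tiny file that *declares* a ~16 GiB float32 dataset.
# No data is written, so HDF5 stores only metadata and the file stays small.
path = os.path.join(tempfile.mkdtemp(), "model.weights.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("vars/kernel", shape=(1 << 32,), dtype="f4")

size = os.path.getsize(path)
print(f"declared ~16 GiB, but the file is only {size} bytes on disk")
```

A loader that trusts the declared shape would attempt to allocate the full 16 GiB tensor when reading this file back, which is the memory-exhaustion behaviour the advisory describes.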


How can this vulnerability impact me?

This vulnerability can impact you by causing a Denial of Service (DoS) on systems running vulnerable versions of Keras. An attacker can remotely trigger excessive memory consumption by providing a specially crafted .keras model file, leading to memory exhaustion and crashing the Python interpreter. This can disrupt services, halt machine learning workflows, and potentially cause downtime or loss of availability in applications relying on Keras for model loading. [1]


How can this vulnerability be detected on my network or system? Can you suggest some commands?

Detection focuses on identifying maliciously crafted .keras model files whose embedded HDF5 datasets declare extremely large or malformed shapes. Since a .keras archive is a zip file, extract model.weights.h5 and inspect it with an HDF5 tool such as 'h5dump -H' (header-only mode), looking for datasets with unusually large dimensions or a tensor rank greater than 64. At runtime, repeated Python interpreter crashes or sudden memory spikes during model loading can also indicate exploitation attempts. [1]
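As a programmatic alternative to h5dump, a small scanner can flag suspicious declared shapes before a model is ever loaded. This is an illustrative sketch using the third-party h5py library; the element threshold and function name are assumptions, not part of the advisory:

```python
import h5py  # third-party HDF5 bindings; assumed available for illustration

# Threshold is an assumption: ~268M elements (~1 GiB of float32 data).
SUSPICIOUS_ELEMENTS = 1 << 28
MAX_RANK = 64  # HDF5's maximum supported dataset rank


def scan_h5_shapes(path, max_elements=SUSPICIOUS_ELEMENTS):
    """Return (name, shape) pairs for datasets whose declared shape would
    require more than max_elements elements if fully materialised."""
    flagged = []

    def visit(name, obj):
        if isinstance(obj, h5py.Dataset):
            n = 1
            for dim in obj.shape:
                n *= dim
            if n > max_elements or len(obj.shape) > MAX_RANK:
                flagged.append((name, obj.shape))

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return flagged
```

Because a .keras archive is a zip file, extract model.weights.h5 first (e.g. with the standard zipfile module) and then run the scanner over the extracted file.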


What immediate steps should I take to mitigate this vulnerability?

Immediate mitigation steps: update Keras to a release that includes the patch for CVE-2026-0897, which enforces strict validation of HDF5 dataset metadata and caps memory allocation at 1 GiB during model loading. Avoid loading untrusted or unauthenticated .keras model files, and apply defensive checks on declared dataset shapes before loading models. The patched loader (together with the related KerasFileEditor fix) rejects hostile shapes outright, preventing Denial of Service via memory exhaustion. [1]
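The defensive check mentioned above can be sketched as a simple pre-allocation guard. The 1 GiB cap mirrors the limit described for the patch; the function name and the rank limit of 64 are illustrative assumptions:

```python
import math

MAX_BYTES = 1 << 30  # 1 GiB cap, mirroring the patched limit described above
MAX_RANK = 64        # HDF5's maximum supported dataset rank


def shape_is_safe(shape, itemsize, max_bytes=MAX_BYTES, max_rank=MAX_RANK):
    """Return True if a tensor with this declared shape and per-element size
    could be allocated without exceeding the memory cap."""
    if len(shape) > max_rank or any(d < 0 for d in shape):
        return False
    return math.prod(shape) * itemsize <= max_bytes
```

Such a guard would be applied to each dataset's declared shape before the corresponding buffer is allocated, so hostile shapes are rejected instead of triggering a multi-gigabyte allocation.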

