CVE-2025-58446
BaseFortify
Publication date: 2025-09-06
Last updated on: 2025-09-18
Assigner: GitHub, Inc.
Description
CVSS Scores
EPSS Scores
| Probability: | |
| Percentile: | |
Meta Information
Affected Vendors & Products
| Vendor | Product | Version / Range |
|---|---|---|
| mlc-ai | xgrammar | 0.1.23 |
Helpful Resources
Exploitability
| CWE ID | Description |
|---|---|
| CWE-770 | The product allocates a reusable resource or group of resources on behalf of an actor without imposing any intended restrictions on the size or number of resources that can be allocated. |
Attack-Flow Graph
AI Powered Q&A
Can you explain this vulnerability to me?
This vulnerability is a denial of service (DoS) issue in the xgrammar library version 0.1.23. It arises from a regression in the Earley parser and the grammar optimizer that causes extremely slow processing of very large enum grammars (greater than 100,000 characters). Attackers can exploit this inefficiency by submitting large grammars that take minutes to parse, overwhelming the system and causing a denial of service. The problem was fixed in version 0.1.24 by optimizing the grammar optimizer and disabling some slow optimizations for large grammars. [1]
How can this vulnerability impact me?
If you use the vulnerable xgrammar version 0.1.23, an attacker can exploit this vulnerability to cause denial of service by submitting very large enum grammars that take excessive time to parse. This can overwhelm your system's resources, degrade performance, and potentially make your model services unavailable until the parsing completes or the system recovers. Upgrading to version 0.1.24 mitigates this risk. [1]
How can this vulnerability be detected on my network or system? Can you suggest some commands?
You can detect exposure by checking whether xgrammar version 0.1.23 is installed (e.g., `pip show xgrammar`) and by testing how long the library takes to parse a very large enum grammar (greater than 100,000 characters). The published proof of concept (PoC) generates a JSON schema containing a large enum (e.g., 10,000 entries) and measures parsing time: on the vulnerable version, parsing takes minutes rather than seconds. [1]
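A minimal detection sketch along the lines of the PoC is shown below. It builds a JSON schema whose single enum holds 10,000 entries (well over the 100,000-character threshold the advisory cites) and times how long xgrammar takes to compile it. The `xgrammar.Grammar.from_json_schema` call is an assumption about the library's interface and may need adjusting to your installed version; the helper name `build_large_enum_schema` is illustrative.

```python
import json
import time

def build_large_enum_schema(n_entries: int = 10_000) -> str:
    """Return a JSON schema string whose single property is a huge enum."""
    schema = {
        "type": "object",
        "properties": {
            "choice": {"enum": [f"option_{i}" for i in range(n_entries)]}
        },
        "required": ["choice"],
    }
    return json.dumps(schema)

schema_str = build_large_enum_schema()
# 10,000 entries yields a schema well over the 100,000-character trigger size.
print(f"schema size: {len(schema_str)} characters")

try:
    # Assumed xgrammar API; adjust to the version you have installed.
    import xgrammar

    start = time.monotonic()
    xgrammar.Grammar.from_json_schema(schema_str)
    elapsed = time.monotonic() - start
    # Per the advisory, 0.1.23 takes minutes here; 0.1.24 is far faster.
    print(f"parse time: {elapsed:.1f}s")
except ImportError:
    print("xgrammar not installed; install 0.1.23 to reproduce")
```

If parsing the generated schema takes minutes on your system, the vulnerable version is likely in use.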
What immediate steps should I take to mitigate this vulnerability?
The immediate mitigation step is to upgrade the xgrammar library to version 0.1.24 or later, where the vulnerability is fixed by optimizing the grammar optimizer and disabling slow optimizations for large grammars. Until the upgrade is applied, avoid processing very large enum grammars (>100k characters) to prevent denial of service. Additionally, monitoring and limiting input grammar sizes can help reduce exposure. [1]
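Until the upgrade is applied, the input-size limit mentioned above can be enforced with a simple pre-parse guard. This is a hedged sketch, not part of xgrammar itself: the 100,000-character threshold comes from the advisory, while the names `MAX_GRAMMAR_CHARS`, `GrammarTooLargeError`, and `check_grammar_size` are illustrative.

```python
# Reject oversized grammars before they ever reach the parser.
MAX_GRAMMAR_CHARS = 100_000  # threshold cited in the advisory

class GrammarTooLargeError(ValueError):
    """Raised when an input grammar exceeds the configured size limit."""

def check_grammar_size(grammar: str, limit: int = MAX_GRAMMAR_CHARS) -> str:
    """Return the grammar unchanged, or raise if it is suspiciously large."""
    if len(grammar) > limit:
        raise GrammarTooLargeError(
            f"grammar is {len(grammar)} chars; limit is {limit}"
        )
    return grammar
```

Calling `check_grammar_size(schema_str)` before handing a schema to xgrammar caps the attacker-controlled input size and keeps pathological enum grammars from tying up the parser.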