EUVD-2025-21070

CVE-2025-53630 | HIGH
Published 2025-07-10 by [email protected]
CVSS 4.0 score: 8.9

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:P
Attack Vector
Network
Attack Complexity
Low
Privileges Required
None
User Interaction
None

Lifecycle Timeline

Patch Released: Mar 31, 2026 21:13 (nvd) - patch available
Analysis Generated: Mar 16, 2026 06:52 (vuln.today)
EUVD ID Assigned: Mar 16, 2026 06:52 (euvd) - EUVD-2025-21070
CVE Published: Jul 10, 2025 20:15 (nvd) - HIGH 8.9

Description

llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.

Analysis

CVE-2025-53630 is a high-severity integer overflow vulnerability in llama.cpp's GGUF file parsing routine that can trigger heap out-of-bounds reads and writes, potentially leading to information disclosure, memory corruption, or remote code execution. The vulnerability affects llama.cpp versions prior to commit 26a48ad699d50b6268900062661bd22f3e792579; its CVSS 4.0 score of 8.9 falls in the High range (Critical begins at 9.0). The network attack vector (AV:N) combined with low attack complexity (AC:L) and no required privileges (PR:N) means remote attackers can exploit it without authentication by supplying malformed GGUF model files.

Technical Context

llama.cpp is a C/C++ inference engine for Large Language Models (LLMs) that supports multiple model architectures through GGUF (GGML Unified Format) file parsing. The vulnerability exists in the gguf_init_from_file_impl function within ggml/src/gguf.cpp, which handles deserialization and validation of GGUF model files. CWE-122 (Heap-based Buffer Overflow) combined with integer overflow indicates that size calculations in the file header parsing logic fail to properly validate field values, allowing attackers to cause heap allocations smaller than required or to trigger out-of-bounds access. When processing attacker-controlled GGUF files, integer overflow in size computations causes the heap buffer management to lose track of actual allocation boundaries, enabling both read (information leak) and write (memory corruption) primitives.

Affected Products

llama.cpp project (ggml library): all versions prior to commit 26a48ad699d50b6268900062661bd22f3e792579. Approximate CPE: cpe:2.3:a:llama_project:llama.cpp:*:*:*:*:*:*:*:* (versions < fix commit). Downstream applications affected include:
(1) Any LLM inference service using llama.cpp as the backend.
(2) Local AI deployment tools that bundle llama.cpp.
(3) Chatbot frameworks (e.g., LocalAI, Ollama, and similar) if they use vulnerable versions.
(4) ML development environments and Jupyter notebook plugins using llama.cpp.
The vulnerability affects both CPU and GPU inference variants, since GGUF parsing occurs before any architecture-specific execution.

Remediation

Immediate remediation:
(1) Update llama.cpp to a version after commit 26a48ad699d50b6268900062661bd22f3e792579.
(2) Rebuild any downstream applications (chatbots, inference services, SDKs) that statically link llama.cpp.
(3) For source-based deployments, cherry-pick the fix commit or apply the security patch.

Short-term mitigations pending patching:
(1) Restrict GGUF file sources to trusted model repositories only (e.g., HuggingFace official models).
(2) Validate GGUF file signatures/checksums before loading where possible.
(3) Run llama.cpp services in isolated containers with restricted memory access and seccomp profiles.
(4) Implement strict file type validation at the application layer before passing files to llama.cpp.

For organizations using dependent projects, verify that those projects have updated to patched llama.cpp versions.

Priority Score

45 (scale: Low / Medium / High / Critical)
KEV: 0
EPSS: +0.1
CVSS: +44
POC: 0

Vendor Status

Ubuntu

Priority: Medium
llama.cpp
Release Status Version
jammy DNE -
noble DNE -
plucky DNE -
upstream needs-triage -
questing needs-triage -

Debian

Bug #1109124
ggml
Release Status Fixed Version Urgency
sid fixed 0.9.7-2 -
(unstable) fixed 0.0~git20250711.b6d2ebd-1 -
llama.cpp
Release Status Fixed Version Urgency
sid fixed 8064+dfsg-2 -
(unstable) fixed 5882+dfsg-1 unimportant
