CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Description
llama.cpp is a C/C++ library for inference of several LLM models. Prior to release b7824, an integer overflow vulnerability in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. This causes `ggml_nbytes` to return a far smaller size than actually required (e.g., 4 MB instead of exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. This vulnerability allows potential Remote Code Execution (RCE) via memory corruption. Release b7824 contains a fix.
Analysis
Remote code execution in llama.cpp prior to commit b7824 is possible through a crafted GGUF file that exploits an integer overflow in the `ggml_nbytes` function, causing a heap buffer overflow during tensor processing. An attacker can bypass memory validation by specifying tensor dimensions whose product overflows, wrapping the size calculation around to a drastically smaller value and allowing memory corruption and potential code execution. …
Remediation
Within 24 hours: Identify all systems and applications using llama.cpp and create an inventory; communicate to end-users to avoid opening GGUF files from untrusted sources. Within 7 days: Implement file validation controls and network segmentation to restrict GGUF file processing to isolated environments; monitor vendor advisories for patch availability. …
Vendor Status
Debian
| Release | Status | Fixed Version | Urgency |
|---|---|---|---|
| sid (unstable) | fixed | 8064+dfsg-2 | - |
External POC / Exploit Code
EUVD-2026-14668