CVE-2026-27940

HIGH
2026-03-12 [email protected]
7.8
CVSS 3.1

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Attack Vector
Local
Attack Complexity
Low
Privileges Required
None
User Interaction
Required
Scope
Unchanged
Confidentiality
High
Integrity
High
Availability
High

Lifecycle Timeline

Analysis Generated: Mar 12, 2026 - 19:57 (vuln.today)
CVE Published: Mar 12, 2026 - 17:16 (NVD)

Description

llama.cpp is a C/C++ inference engine for several LLM models. Prior to release b8146, gguf_init_from_file_impl() in gguf.cpp is vulnerable to an integer overflow that leads to an undersized heap allocation; the subsequent fread() then writes 528+ bytes of attacker-controlled data past the buffer boundary. This bypasses the fix for a similar bug in the same file, CVE-2025-53630, which overlooked some code paths. The vulnerability is fixed in b8146.

Analysis

A local attacker can trigger a heap buffer overflow in llama.cpp versions before b8146 via an integer overflow in the GGUF file-parsing function, potentially achieving arbitrary code execution with high confidentiality, integrity, and availability impact. The flaw stems from an undersized heap allocation followed by an unvalidated write of over 528 bytes of attacker-controlled data, bypassing a previous fix for the same component. …


Remediation

Within 24 hours: identify all systems running llama.cpp and document affected versions below b8146. Within 7 days: implement network segmentation to restrict access to llama.cpp inference services, and disable processing of untrusted GGUF files where operationally feasible. …


Priority Score

39
KEV: 0
EPSS: +0.0
CVSS: +39
POC: 0

Vendor Status


