CVE-2025-25183

Severity: LOW (CVSS 3.1 base score 2.6)
Published: 2025-02-07 ([email protected])

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N

Attack Vector: Network
Attack Complexity: High
Privileges Required: Low
User Interaction: Required
Scope: Unchanged
Confidentiality: None
Integrity: Low
Availability: None
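
For reference, the 2.6 base score follows directly from the CVSS 3.1 specification's published metric weights. Below is a minimal sketch of that calculation in Python; the roundup helper approximates the spec's Roundup function, which is defined more defensively against floating-point error:

    import math

    def roundup(x: float) -> float:
        # Approximates CVSS 3.1 "Roundup": smallest one-decimal value >= x.
        return math.ceil(x * 10) / 10

    # CVSS 3.1 weights for AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N
    av, ac, pr, ui = 0.85, 0.44, 0.62, 0.62  # PR:L weight under Scope Unchanged
    c, i, a = 0.0, 0.22, 0.0

    iss = 1 - (1 - c) * (1 - i) * (1 - a)      # 0.22
    impact = 6.42 * iss                        # 1.4124 (Scope Unchanged form)
    exploitability = 8.22 * av * ac * pr * ui  # ~1.1818

    base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10.0))
    print(base)  # 2.6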

Lifecycle Timeline

CVE Published: Feb 07, 2025 - 20:15 (nvd)
Analysis Generated: Mar 28, 2026 - 18:25 (vuln.today)
Patch Released: Mar 31, 2026 - 21:13 (nvd); patch available

Description

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse that can interfere with subsequent responses and cause unintended behavior. Prefix caching uses Python's built-in hash() function. As of Python 3.12, hash(None) returns a predictable constant value, which makes it more feasible for someone to attempt to exploit hash collisions. The impact of a collision is the reuse of cache generated from different content. Given knowledge of the prompts in use and the predictable hashing behavior, an attacker could intentionally populate the cache with a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.
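
To see the underlying behavior, the snippet below can be run twice in separate processes on Python 3.12 or later. It is a minimal illustration, not vLLM code, and the tuple-shaped cache key is an assumption for demonstration purposes:

    # Run twice in separate processes under Python >= 3.12: the string
    # hash differs between runs (hash randomization is on by default),
    # but hash(None) is the same constant every time, so any cache key
    # derived from it is predictable across processes.
    print("hash('some prompt'):", hash("some prompt"))
    print("hash(None):         ", hash(None))

    # Illustrative prefix-cache-style key: the first block of a prompt
    # has no parent, so None ends up in the hashed tuple. (vLLM's exact
    # key structure is not reproduced here; token IDs are made up.)
    first_block_key = hash((None, (101, 2054, 2003)))
    print("block key:          ", first_block_key)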

Analysis

Rated low severity (CVSS 3.1 base score 2.6), this vulnerability in vLLM is remotely exploitable, though exploitation requires high attack complexity, low privileges, and user interaction. A vendor patch is available in version 0.7.2.

Technical Context

This vulnerability is classified under CWE-354 (Improper Validation of Integrity Check Value). As described above, vLLM's prefix caching derives cache keys from Python's built-in hash() function. Because hash(None) has been a predictable constant since Python 3.12, an attacker who knows the prompts in use could craft a prompt whose cache key collides with another prompt's, causing responses to be served from cache generated for different content. Affected product: vLLM; the issue is fixed in version 0.7.2.
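
A common pattern for closing this class of weakness is to derive cache keys from a salted cryptographic hash rather than Python's hash(). The sketch below is hypothetical and is not taken from vLLM 0.7.2; the helper name and key structure are illustrative:

    import hashlib
    import os

    # Per-process random salt: without knowing it, an attacker cannot
    # precompute inputs that collide. (Hypothetical helper, not vLLM code.)
    _CACHE_SALT = os.urandom(16)

    def block_key(parent_key: bytes, token_ids: tuple[int, ...]) -> bytes:
        """Derive a prefix-cache key from the parent block's key and tokens."""
        h = hashlib.sha256(_CACHE_SALT)
        h.update(parent_key)
        h.update(",".join(map(str, token_ids)).encode())
        return h.digest()

    root_key = block_key(b"", (101, 2054, 2003))
    child_key = block_key(root_key, (2088, 999))

With SHA-256, collisions are computationally infeasible to construct, and the per-process random salt keeps keys unpredictable even across identical deployments.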

Affected Products

vLLM (versions prior to 0.7.2).

Remediation

A vendor patch is available: upgrade to vLLM 0.7.2 or later. There are no known workarounds for this vulnerability, so upgrading is the primary remediation. Where an immediate upgrade is not possible, restrict network access to the serving endpoint and implement network segmentation and monitoring as interim mitigations.
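
As a quick local check, the installed version can be compared against the fixed release. This sketch assumes the third-party packaging library is available (it usually ships alongside pip):

    from importlib.metadata import version
    from packaging.version import Version  # assumes 'packaging' is installed

    installed = Version(version("vllm"))
    if installed < Version("0.7.2"):
        print(f"vLLM {installed} is affected by CVE-2025-25183.")
        print("Upgrade with: pip install --upgrade 'vllm>=0.7.2'")
    else:
        print(f"vLLM {installed} includes the fix for CVE-2025-25183.")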

Priority Score

13 (Low)

KEV: 0
EPSS: +0.3
CVSS: +13
POC: 0
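
The headline value appears to be the sum of the listed components rounded to an integer; this is an assumption, since vuln.today's exact formula is not documented on this page:

    # Assumed composition: the displayed components sum to the headline score.
    components = {"KEV": 0, "EPSS": 0.3, "CVSS": 13, "POC": 0}
    priority = round(sum(components.values()))
    print(priority)  # 13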
