CVE-2025-46570
Severity: LOW
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PagedAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.
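To make the timing signal concrete, the sketch below measures TTFT against a vLLM OpenAI-compatible endpoint using a streaming completion request. The base URL, model name, and prompt are hypothetical placeholders; the /v1/completions streaming interface is the standard OpenAI-compatible API that vLLM serves.

```python
# Minimal TTFT measurement sketch against an OpenAI-compatible vLLM endpoint.
# The base URL, model name, and prompt below are hypothetical placeholders.
import time
import requests

BASE_URL = "http://localhost:8000/v1/completions"  # assumed local vLLM server

def measure_ttft(prompt: str, model: str = "my-model") -> float:
    """Send a streaming completion request and return time to first token."""
    start = time.monotonic()
    resp = requests.post(
        BASE_URL,
        json={"model": model, "prompt": prompt, "max_tokens": 1, "stream": True},
        stream=True,
        timeout=30,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # first non-empty SSE line marks arrival of the first token
            return time.monotonic() - start
    raise RuntimeError("no tokens received")

# Pre-0.9.0, a prompt sharing a cached prefix with an earlier request tends
# to show a measurably lower TTFT than one whose prefix is cold.
print(measure_ttft("The patient record for Alice begins with"))
```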
Analysis
Rated low severity (CVSS 3.1 base score 2.6), this timing side channel is exploitable over the network. The vector reflects high attack complexity, low required privileges, required user interaction, and a limited confidentiality impact: an attacker can infer whether another user's prompt shares a prefix with one they submit, with no effect on integrity or availability.
Technical Context
This vulnerability is classified under CWE-208 (Observable Timing Discrepancy). Because the prefix cache shortens the prefill phase whenever a new prompt shares a cached prefix chunk, the resulting TTFT differences are large enough to act as an oracle: by timing their own requests, an attacker can test whether a guessed prefix matches a prompt previously processed by the shared engine. Affected product: vLLM, versions prior to 0.9.0 (fixed in 0.9.0).
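Since a single request is noisy, a CWE-208 oracle like this would typically be confirmed statistically rather than from one measurement. A minimal sketch of the distinguishing step, assuming TTFT samples were already collected with a helper such as measure_ttft above (the sample values here are illustrative, not measured data):

```python
# Hypothetical distinguishing step for the timing oracle: decide whether a
# candidate prefix was cached by comparing its TTFT samples against a cold
# baseline. The sample values are illustrative, not measured data.
from statistics import mean, stdev

candidate_ttfts = [0.041, 0.039, 0.043, 0.040, 0.042]  # guessed prefix
baseline_ttfts = [0.071, 0.068, 0.074, 0.070, 0.069]   # random (cold) prefix

gap = mean(baseline_ttfts) - mean(candidate_ttfts)
noise = max(stdev(candidate_ttfts), stdev(baseline_ttfts))

# Flag a likely cache hit when the mean gap clearly exceeds observed jitter.
if gap > 3 * noise:
    print(f"Likely prefix-cache hit (gap {gap * 1000:.1f} ms)")
else:
    print("No significant timing difference")
```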
Affected Products
vLLM (versions prior to 0.9.0).
Remediation
A vendor patch is available: upgrade vLLM to version 0.9.0 or later as soon as possible. Until the upgrade can be applied, implement network segmentation and monitoring as interim mitigations.
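Where upgrading is not immediately possible and the workload does not depend on prefix caching, disabling that feature removes the timing signal at the cost of the prefill speedup. A minimal sketch using vLLM's offline LLM entry point; the model name is a placeholder, and the enable_prefix_caching keyword (forwarded to the engine arguments) should be verified against the deployed version:

```python
# Interim mitigation sketch: build the engine with prefix caching disabled
# so matching prefix chunks no longer shorten the prefill phase (and TTFT).
# Model name is a placeholder; verify keyword support for your vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(model="my-org/my-model", enable_prefix_caching=False)
params = SamplingParams(max_tokens=16)
outputs = llm.generate(["Hello, world"], params)
print(outputs[0].outputs[0].text)
```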