CVE-2026-22773
Severity: MEDIUM

CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
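The 6.5 base score can be verified directly from the vector above. The following Python sketch recomputes it using the base-score equations and metric weights from the FIRST CVSS v3.1 specification; it is a convenience check, not part of the advisory data.

# Recompute the CVSS v3.1 base score for
# CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
# using the equations and weights from the FIRST CVSS v3.1 specification.

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= x (spec Appendix A)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

AV, AC, PR, UI = 0.85, 0.77, 0.62, 0.85  # AV:N, AC:L, PR:L (Scope Unchanged), UI:N
C, I, A = 0.0, 0.0, 0.56                 # C:N, I:N, A:H

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # 0.56
impact = 6.42 * iss                        # 3.5952 (Scope Unchanged)
exploitability = 8.22 * AV * AC * PR * UI  # ~2.8353

base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base)  # 6.5 -> Medium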
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
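To see why a degenerate image can terminate the whole server, consider how ViT-style vision encoders, the family Idefics3's vision tower belongs to, split each image into fixed-size patches. The sketch below is conceptual, not vLLM's actual Idefics3 code, and the exact failing operation upstream may differ; it only illustrates the failure class, in which a tensor operation that assumes the image spans at least one patch raises an unhandled RuntimeError.

# Conceptual sketch (not vLLM's code path): a 1x1 image breaks
# patch extraction in a ViT-style vision encoder.
import torch

def patchify(image: torch.Tensor, patch_size: int = 14) -> torch.Tensor:
    """Split a (C, H, W) image into flattened patches, ViT-style."""
    c, h, w = image.shape
    # unfold() raises RuntimeError when H or W is smaller than patch_size
    patches = image.unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
    # (C, H//p, W//p, p, p) -> (num_patches, C*p*p)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c * patch_size * patch_size)

print(patchify(torch.rand(3, 224, 224)).shape)  # torch.Size([256, 588]) -- fine
patchify(torch.rand(3, 1, 1))  # RuntimeError -- the crash class behind this CVE

In a serving engine, an exception like this escaping the request handler into the core engine loop is what turns a single malicious request into complete server termination.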
Analysis
vLLM versions from 0.6.4 to before 0.12.0 are affected by allocation of resources without limits or throttling (CWE-770), rated CVSS 6.5 (Medium).
Remediation
Within 30 days: identify affected systems running versions from 0.6.4 to before 0.12.0 and upgrade to the patched release, 0.12.0 or later, as part of the regular patch cycle. Where an immediate upgrade is not possible, validating multimodal inputs before inference can reduce exposure, as in the sketch below.
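As an interim measure only, the hypothetical guard below rejects degenerate images before they reach the engine. It is a sketch, not the upstream fix: validate_image and MIN_DIM are illustrative names, and the appropriate minimum depends on the deployed model's patch size.

# Hypothetical pre-inference guard (not the upstream 0.12.0 patch):
# reject images too small for the vision encoder before inference.
from io import BytesIO

from PIL import Image

MIN_DIM = 14  # assumed minimum: at least one encoder patch per side

def validate_image(data: bytes) -> Image.Image:
    """Raise ValueError for images below MIN_DIM on either side."""
    Image.open(BytesIO(data)).verify()  # cheap structural integrity check
    img = Image.open(BytesIO(data))     # verify() consumes the object; reopen
    if img.width < MIN_DIM or img.height < MIN_DIM:
        raise ValueError(f"image {img.size} below minimum {MIN_DIM}x{MIN_DIM}")
    return img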
External POC / Exploit Code
GHSA-grg2-63fw-f2qr