CVE-2025-62372
Severity: HIGH
CVSS Vector
CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:H/E:X/CR:X/IR:X/AR:X/MAV:X/MAC:X/MAT:X/MPR:X/MUI:X/MVC:X/MVI:X/MVA:X/MSC:X/MSI:X/MSA:X/S:X/AU:X/R:X/V:X/RE:X/U:X
Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
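A minimal sketch of the failure mode, assuming a model whose multimodal embedding inputs are 2-D tensors of shape (num_tokens, hidden_size) with hidden_size = 4096 (illustrative values, not taken from the advisory): a tensor with the correct ndim but a wrong hidden dimension passes an ndim-only check yet would fail a shape-aware one.

```python
import torch

# Illustrative value only: each model defines its own hidden size.
EXPECTED_HIDDEN_SIZE = 4096

# Correct ndim (2) but a wrong hidden dimension (1024 instead of 4096).
embedding = torch.zeros(16, 1024)

# An ndim-only check -- the kind of validation the advisory describes
# as insufficient -- accepts the tensor:
print("ndim check passes:", embedding.ndim == 2)  # True

# A shape-aware check catches the mismatch before it reaches the engine:
print("shape check passes:", embedding.shape[-1] == EXPECTED_HIDDEN_SIZE)  # False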
Analysis
This vulnerability is rated high severity (CVSS 8.3). It is remotely exploitable over the network, has low attack complexity, requires only low privileges, and needs no user interaction; successful exploitation crashes the vLLM engine, resulting in a denial of service.
Technical Context
This vulnerability is classified under CWE-129 (Improper Validation of Array Index). When serving a multimodal model, vLLM checked only the number of dimensions (ndim) of user-supplied multimodal embedding inputs, not their full shape. An embedding tensor with the correct ndim but an incorrect shape (e.g., a wrong hidden dimension) therefore passed input validation and crashed the engine downstream, regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). Versions from 0.5.5 up to, but not including, 0.11.1 are affected; version 0.11.1 contains the fix.
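A hedged sketch of the CWE-129-style guard implied by the fix, not vLLM's actual patch: validate the full expected shape of a user-supplied embedding before it reaches the engine. The helper name, the 2-D assumption, and the hidden_size parameter are all hypothetical.

```python
import torch

def validate_embedding(tensor: torch.Tensor, hidden_size: int) -> torch.Tensor:
    """Reject embeddings whose shape does not match the model.

    Hypothetical helper illustrating shape-aware validation rather
    than an ndim-only check.
    """
    if tensor.ndim != 2:
        raise ValueError(f"expected a 2-D tensor, got ndim={tensor.ndim}")
    if tensor.shape[-1] != hidden_size:
        raise ValueError(
            f"expected hidden dimension {hidden_size}, got {tensor.shape[-1]}"
        )
    return tensor

# Usage: the malformed input raises ValueError at the boundary
# instead of crashing the engine later.
try:
    validate_embedding(torch.zeros(16, 1024), hidden_size=4096)
except ValueError as exc:
    print("rejected:", exc)
```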
Affected Products
vLLM (versions 0.5.5 to before 0.11.1).
Remediation
A vendor patch is available: upgrade to vLLM 0.11.1 or later as soon as possible. Where an immediate upgrade is not possible, restrict network access to the vLLM API server (e.g., via network segmentation) and monitor for unexpected engine crashes as interim mitigations.
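A quick way to check whether a deployment falls in the affected range, assuming the third-party `packaging` library is available; the version bounds are taken from the advisory (>= 0.5.5, < 0.11.1).

```python
from importlib.metadata import version
from packaging.version import Version

# Compare the installed vLLM version against the advisory's range.
installed = Version(version("vllm"))
if Version("0.5.5") <= installed < Version("0.11.1"):
    print(f"vLLM {installed} is in the affected range; upgrade to 0.11.1 or later.")
else:
    print(f"vLLM {installed} is outside the affected range.")
```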