CVE-2026-22778

CRITICAL
9.8
CVSS 3.1

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Attack Vector
Network
Attack Complexity
Low
Privileges Required
None
User Interaction
None
Scope
Unchanged
Confidentiality
High
Integrity
High
Availability
High
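The metric fields above are simply the decoded segments of the vector string. A minimal parsing sketch (the dict-based approach is an illustration, not an official CVSS library API):

```python
# Parse a CVSS v3.1 vector string into its metric fields.
# Minimal illustration only; real tooling should use a dedicated CVSS library.
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

# The first segment is the version label ("CVSS:3.1"); the rest are
# Metric:Value pairs such as "AV:N" (Attack Vector: Network).
metrics = dict(part.split(":") for part in vector.split("/")[1:])

print(metrics)
# {'AV': 'N', 'AC': 'L', 'PR': 'N', 'UI': 'N', 'S': 'U', 'C': 'H', 'I': 'H', 'A': 'H'}
```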

Lifecycle Timeline

Analysis Generated
Mar 12, 2026 - 22:01 (vuln.today)
Patch Released
Feb 23, 2026 - 18:19 (nvd) - patch available
CVE Published
Feb 02, 2026 - 23:16 (nvd)

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.3 up to but not including 0.14.1, sending an invalid image to vLLM's multimodal endpoint causes PIL to throw an error, and vLLM returns that error message to the client, leaking a heap address. With this leak, an attacker reduces the ASLR search space from roughly 4 billion guesses to about 8. The vulnerability can be chained with a heap overflow in the JPEG2000 decoder used by OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
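The leak mechanism is a general Python pitfall: CPython's default repr() embeds an object's heap address, so any error text that interpolates repr() and is echoed to clients discloses pointers, collapsing roughly 32 bits of heap-ASLR entropy (~4 billion guesses) to a handful. A hedged, self-contained illustration (the class and messages are invented for this sketch, not vLLM's actual code):

```python
# Illustration only: the default repr() of a plain Python object includes
# its heap address, so interpolating it into a client-facing error message
# leaks a pointer.
class ImageDecoder:  # hypothetical stand-in for an internal decoder object
    pass

decoder = ImageDecoder()
leaky_msg = f"cannot decode image with {decoder!r}"
print(leaky_msg)  # e.g. "cannot decode image with <__main__.ImageDecoder object at 0x7f...>"

# Safe pattern: keep the details in server-side logs and return a generic
# message to the client instead.
safe_msg = "invalid image payload"
print("0x" in leaky_msg, "0x" in safe_msg)  # True False
```

This is the general shape of the fix: catch the decode error and respond with a fixed string rather than the raw exception text.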

Analysis

Information exposure in the vLLM inference engine, versions 0.8.3 up to but not including 0.14.1. Invalid image requests to the multimodal endpoint cause sensitive data (a heap address from the PIL error) to be disclosed in error responses. …


Remediation

Within 24 hours: identify all systems running vLLM versions 0.8.3 up to but not including 0.14.1 and assess what sensitive data may have been disclosed in error responses. Within 7 days: apply the vendor patch to upgrade to vLLM 0.14.1 or later across all affected environments. …
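For triage, the affected-range check can be sketched with a plain tuple comparison (assumes simple X.Y.Z version strings with no pre-release tags; real deployments should use a proper version library):

```python
# Affected range per the advisory: >= 0.8.3 and < 0.14.1.
def is_affected(version: str) -> bool:
    # Naive parse for illustration; use packaging.version in production.
    parts = tuple(int(p) for p in version.split(".")[:3])
    return (0, 8, 3) <= parts < (0, 14, 1)

print(is_affected("0.13.0"))  # True
print(is_affected("0.14.1"))  # False
```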


Priority Score

49
KEV: 0
EPSS: +0.1
CVSS: +49
POC: 0
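The breakdown above appears to be additive; a hedged sketch of recomposing the displayed score (the weighting scheme is inferred from the page, not documented):

```python
# Component contributions as shown on the page; the additive model is an
# assumption based on the displayed breakdown.
components = {"KEV": 0, "EPSS": 0.1, "CVSS": 49, "POC": 0}
priority = round(sum(components.values()))
print(priority)  # 49
```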

Vendor Status

