CVE-2026-25960
Severity: HIGH
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L
Description
vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs, but load_from_url_async makes its HTTP requests with aiohttp, which internally parses URLs with the yarl library. When the two parsers interpret a crafted URL differently, a hostname that passes validation can differ from the host the request is actually sent to. This vulnerability is present in 0.17.0.
Analysis
vLLM 0.17.0 contains a Server-Side Request Forgery (SSRF) vulnerability where inconsistent URL parsing between the validation layer (urllib3) and the HTTP client (aiohttp/yarl) allows authenticated attackers to bypass SSRF protections and make requests to internal resources. An attacker with valid credentials can craft malicious URLs to access restricted endpoints or internal services that should be blocked by the SSRF mitigation implemented in version 0.15.1.
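The root cause is a parser differential: the host that the validator checks is not guaranteed to be the host the client connects to. A common defensive pattern is to validate and dispatch using a single parse, and to reject hosts that resolve to private, loopback, or otherwise reserved address ranges. The sketch below is a minimal, stdlib-only illustration of that pattern; `is_safe_url` is a hypothetical helper, not vLLM's actual fix, which may differ in detail.

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host resolves to a private or reserved address.

    Hypothetical SSRF guard: parse the URL exactly once, then validate
    every address the hostname resolves to before any request is made.
    """
    parts = urlsplit(url)
    # Only plain HTTP(S) with an explicit hostname is acceptable.
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    try:
        # Resolve the hostname; numeric IPs resolve without a DNS lookup.
        infos = socket.getaddrinfo(parts.hostname, parts.port or 80)
    except socket.gaierror:
        return False
    for _family, _type, _proto, _canon, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        # Block loopback, RFC 1918, link-local, and reserved ranges.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_safe_url("http://127.0.0.1:8000/model"))  # loopback -> False
print(is_safe_url("ftp://10.0.0.1/"))              # wrong scheme -> False
```

The key design point is that the same parsed result (`parts`) used for validation must also be the one handed to the HTTP client; re-parsing the raw string with a second library reintroduces exactly the differential described above.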
Remediation
Within 24 hours: identify all vLLM instances in production and document their network connectivity and exposure.
Within 7 days: implement network segmentation to restrict vLLM outbound traffic and deploy WAF rules to block suspicious URL patterns in load_from_url_async requests.
…
External POC / Exploit Code
GHSA-v359-jj2v-j536