CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
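The base score implied by this vector can be reproduced from the CVSS v3.1 scoring equations. The sketch below is an illustration, not a full scorer: it hardcodes only the metric weights this particular vector uses (Scope: Unchanged), with coefficients taken from the CVSS v3.1 specification.

```python
# Weights for the metric values appearing in this vector only
# (Scope: Unchanged). A full implementation would cover every value.
AV = {"N": 0.85}      # Attack Vector: Network
AC = {"L": 0.77}      # Attack Complexity: Low
PR = {"N": 0.85}      # Privileges Required: None (Scope Unchanged)
UI = {"R": 0.62}      # User Interaction: Required
CIA = {"H": 0.56}     # Confidentiality/Integrity/Availability: High

def roundup(x: float) -> float:
    # CVSS v3.1 "Roundup": round up to one decimal, avoiding float drift.
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000) / 10 + 0.1

def base_score() -> float:
    iss = 1 - (1 - CIA["H"]) ** 3           # all three impacts are High
    impact = 6.42 * iss                     # Scope Unchanged form
    exploitability = 8.22 * AV["N"] * AC["L"] * PR["N"] * UI["R"]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

With these inputs the formula yields a High-severity base score of 8.8.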
Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
Analysis
Remote code execution is possible in vLLM inference and serving engine versions 0.10.1 through 0.17.x due to hardcoded `trust_remote_code=True` settings in two model implementation files that override the user's explicit `--trust-remote-code=False` security configuration. Attackers can exploit this by hosting malicious model repositories that execute arbitrary code when loaded by vLLM, even when users have intentionally disabled remote code trust for security. …
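The flaw class can be shown with a minimal sketch. The function names and return shape below are hypothetical and do not reproduce the actual vLLM model files; they only illustrate how a hardcoded flag silently discards the caller's opt-out, and how the patched behavior propagates it instead.

```python
def load_subcomponent_buggy(repo_id: str, trust_remote_code: bool) -> dict:
    """Hypothetical sub-component loader exhibiting the flaw."""
    # BUG: the hardcoded True discards the caller's explicit
    # trust_remote_code=False, re-enabling remote code paths.
    return {"repo": repo_id, "trust_remote_code": True}

def load_subcomponent_fixed(repo_id: str, trust_remote_code: bool) -> dict:
    """Patched variant: the user's setting is propagated unchanged."""
    return {"repo": repo_id, "trust_remote_code": trust_remote_code}
```

Even with `trust_remote_code=False` passed in, the buggy loader reports `True`, which is exactly the bypass the advisory describes.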
Remediation
Within 24 hours: audit all vLLM deployments to identify versions 0.10.1-0.17.x and document their exposure level and model sources. Within 7 days: upgrade all affected instances to vLLM version 0.18.0 or later; if upgrades cannot be completed in that window, implement the compensating controls listed below. …
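The audit step's version check can be scripted. This is a minimal sketch using the version bounds stated in this advisory; it assumes plain `X.Y.Z` version strings and does not handle pre-release suffixes.

```python
def parse_version(version: str) -> tuple:
    # Assumes a plain dotted numeric string such as "0.17.3";
    # missing components default to zero.
    parts = (version.split(".") + ["0", "0"])[:3]
    return tuple(int(p) for p in parts)

def is_affected(version: str) -> bool:
    # Affected range from the advisory: >= 0.10.1 and < 0.18.0.
    return (0, 10, 1) <= parse_version(version) < (0, 18, 0)
```

For example, `is_affected("0.17.9")` is true while `is_affected("0.18.0")` is false, matching the fixed-version boundary.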
EUVD-2026-16478