vLLM

27 CVEs tracked for this product


CVE-2026-25960 HIGH PATCH This Week

vLLM 0.17.0 contains a Server-Side Request Forgery (SSRF) vulnerability where inconsistent URL parsing between the validation layer (urllib3) and the HTTP client (aiohttp/yarl) allows authenticated attackers to bypass SSRF protections and make requests to internal resources. An attacker with valid credentials can craft malicious URLs to access restricted endpoints or internal services that should be blocked by the SSRF mitigation implemented in version 0.15.1.
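The root cause here is a parser differential: the URL is validated with one library but fetched with another that extracts a different host from the same string. A minimal mitigation sketch (hypothetical helper, stdlib only; the key point is to run the check on the same parsed host the client will actually connect to, and on the resolved addresses rather than the raw hostname):

```python
import ipaddress
import socket
from urllib.parse import urlsplit

# Illustrative blocklist of internal ranges; real deployments
# should tailor this to their network.
BLOCKED_NETS = [
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("169.254.0.0/16"),  # link-local / cloud metadata
    ipaddress.ip_network("::1/128"),
]

def is_blocked(url: str) -> bool:
    """Resolve the URL's host and reject private/internal addresses.

    Fails closed: unparsable URLs and unresolvable hosts are blocked.
    """
    host = urlsplit(url).hostname
    if host is None:
        return True
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except socket.gaierror:
        return True
    for addr in addrs:
        ip = ipaddress.ip_address(addr)
        # Mixed-version membership tests simply return False in ipaddress.
        if any(ip in net for net in BLOCKED_NETS):
            return True
    return False
```

Checking after DNS resolution also closes the gap where a public hostname resolves to an internal address.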

SSRF Vllm Redhat
NVD GitHub VulDB
CVSS 3.1
7.1
EPSS
0.0%
CVE-2026-22778 CRITICAL PATCH Act Now

Information exposure in the vLLM inference engine, versions 0.8.3 up to (but not including) 0.14.1. Invalid image requests to the multimodal endpoint cause sensitive data to be logged. A patch is available.

RCE Heap Overflow AI / ML Vllm Redhat
NVD GitHub
CVSS 3.1
9.8
EPSS
0.1%
CVE-2026-24779 HIGH POC PATCH This Week

vLLM before version 0.14.1 contains a server-side request forgery vulnerability in the MediaConnector class where inconsistent URL parsing between libraries allows attackers to bypass host restrictions and force the server to make arbitrary requests to internal network resources. Public exploit code exists for this vulnerability, which poses significant risk in containerized environments where a compromised vLLM instance could be leveraged to access restricted internal systems. The vulnerability affects users running vLLM's multimodal features with untrusted input.

Python Industrial SSRF Denial Of Service AI / ML +2
NVD GitHub
CVSS 3.1
7.1
EPSS
0.0%
CVE-2026-22807 HIGH PATCH This Week

vLLM is an inference and serving engine for large language models (LLMs). [CVSS 8.8 HIGH]

Python AI / ML Vllm Hugging Face Redhat
NVD GitHub
CVSS 3.1
8.8
EPSS
0.1%
CVE-2026-22773 MEDIUM POC PATCH This Month

vLLM versions up to 0.12.0 are affected by allocation of resources without limits or throttling (CVSS 6.5).

Denial Of Service AI / ML Vllm Redhat
NVD GitHub
CVSS 3.1
6.5
EPSS
0.0%
CVE-2025-66448 HIGH PATCH This Week

vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vllm has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vllm loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host. This vulnerability is fixed in 0.11.1.
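The cross-repo indirection in CVE-2025-66448 abuses the standard Hugging Face auto_map convention, in which a value of the form "other-org/other-repo--module.Class" tells the dynamic-module loader to fetch code from a different repository, while values without "--" resolve inside the current repo. A defensive pre-flight sketch (hypothetical helper name, stdlib only) can flag configs whose auto_map points outside the repo being loaded:

```python
import json

def external_auto_map_entries(config_path: str, repo_id: str) -> list[str]:
    """Return auto_map values in a model's config.json that reference
    a different repository than the one being loaded."""
    with open(config_path) as f:
        config = json.load(f)
    suspicious = []
    for value in (config.get("auto_map") or {}).values():
        # auto_map values may be strings or lists of strings
        refs = value if isinstance(value, list) else [value]
        for ref in refs:
            if isinstance(ref, str) and "--" in ref:
                source_repo = ref.split("--", 1)[0]
                if source_repo != repo_id:
                    suspicious.append(ref)
    return suspicious
```

Any hit means loading the config would pull and execute code from a repo other than the one the operator audited.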

RCE Python Code Injection Debian Vllm +1
NVD GitHub
CVSS 3.1
7.1
EPSS
0.2%
CVE-2025-62426 MEDIUM PATCH This Month

vLLM is an inference and serving engine for large language models (LLMs). Rated medium severity (CVSS 6.5), this vulnerability is remotely exploitable, low attack complexity. This Allocation of Resources Without Limits vulnerability could allow attackers to exhaust system resources through uncontrolled allocation.

Denial Of Service Vllm Redhat
NVD GitHub
CVSS 3.1
6.5
EPSS
0.1%
CVE-2025-62372 HIGH PATCH This Month

vLLM is an inference and serving engine for large language models (LLMs). Rated high severity (CVSS 8.3), this vulnerability is remotely exploitable, low attack complexity.

Denial Of Service Vllm Redhat
NVD GitHub
CVSS 4.0
8.3
EPSS
0.1%
CVE-2025-62164 HIGH PATCH This Month

vLLM is an inference and serving engine for large language models (LLMs). Rated high severity (CVSS 8.8), this vulnerability is remotely exploitable, low attack complexity.

Buffer Overflow RCE Vllm Pytorch AI / ML +1
NVD GitHub
CVSS 3.1
8.8
EPSS
0.1%
CVE-2025-48956 HIGH PATCH This Month

vLLM is an inference and serving engine for large language models (LLMs). Rated high severity (CVSS 7.5), this vulnerability is remotely exploitable, no authentication required, low attack complexity. This Uncontrolled Resource Consumption vulnerability could allow attackers to cause denial of service by exhausting system resources.

Denial Of Service Vllm Redhat
NVD GitHub
CVSS 3.1
7.5
EPSS
0.3%
CVE-2025-48944 MEDIUM POC PATCH This Week

vLLM is an inference and serving engine for large language models (LLMs). Rated medium severity (CVSS 6.5), this vulnerability is remotely exploitable, low attack complexity. Public exploit code available and no vendor patch available.

Denial Of Service Vllm Redhat
NVD GitHub
CVSS 3.1
6.5
EPSS
0.3%
CVE-2025-48943 MEDIUM PATCH This Month

vLLM is an inference and serving engine for large language models (LLMs). Rated medium severity (CVSS 6.5), this vulnerability is remotely exploitable, low attack complexity.

Denial Of Service Vllm Redhat
NVD GitHub
CVSS 3.1
6.5
EPSS
0.2%
CVE-2025-48942 MEDIUM POC PATCH This Week

vLLM is an inference and serving engine for large language models (LLMs). Rated medium severity (CVSS 6.5), this vulnerability is remotely exploitable, low attack complexity. Public exploit code available.

Information Disclosure Vllm Redhat
NVD GitHub
CVSS 3.1
6.5
EPSS
0.2%
CVE-2025-48887 MEDIUM POC PATCH This Week

vLLM, an inference and serving engine for large language models (LLMs), has a Regular Expression Denial of Service (ReDoS) vulnerability. Rated medium severity (CVSS 6.5), this vulnerability is remotely exploitable, low attack complexity. Public exploit code available.

Denial Of Service Vllm Redhat
NVD GitHub
CVSS 3.1
6.5
EPSS
0.3%
CVE-2025-46722 MEDIUM PATCH Monitor

vLLM is an inference and serving engine for large language models (LLMs). Rated medium severity (CVSS 4.2), this vulnerability is remotely exploitable.

Information Disclosure Vllm Redhat
NVD GitHub
CVSS 3.1
4.2
EPSS
0.2%
CVE-2025-46570 LOW PATCH Monitor

vLLM is an inference and serving engine for large language models (LLMs). Rated low severity (CVSS 2.6), this vulnerability is remotely exploitable.

Information Disclosure Vllm
NVD GitHub
CVSS 3.1
2.6
EPSS
0.2%
CVE-2025-47277 CRITICAL POC PATCH Act Now

vLLM, an inference and serving engine for large language models (LLMs), has an issue in versions 0.6.5 through 0.8.4 that ONLY impacts environments using the `PyNcclPipe` KV cache transfer. Rated critical severity (CVSS 9.8), this vulnerability is remotely exploitable, no authentication required, low attack complexity. Public exploit code available.
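This issue, like CVE-2025-32444 and the two 2024 criticals below, comes down to unpickling bytes received from the network: any peer that can reach the socket can send a payload whose __reduce__ invokes an arbitrary callable during deserialization. A minimal illustration of the hardening direction (a sketch of a restricted unpickler, not vLLM's actual fix, which instead limits the interface and assumes a trusted network):

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any module-level global during unpickling, so
    gadget payloads (e.g. a __reduce__ returning os.system) fail before
    any code runs. Only primitive containers deserialize successfully."""

    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} in untrusted pickle data")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

For untrusted peers, a schema-constrained format such as JSON or a fixed binary layout avoids the problem entirely.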

Deserialization Vllm Pytorch AI / ML Redhat
NVD GitHub
CVSS 3.1
9.8
EPSS
0.9%
CVE-2025-30165 HIGH PATCH This Week

vLLM is an inference and serving engine for large language models. Rated high severity (CVSS 8.0), this vulnerability is low attack complexity. No vendor patch available.

RCE Deserialization Vllm Redhat
NVD GitHub
CVSS 3.1
8.0
EPSS
1.3%
CVE-2025-46560 MEDIUM POC PATCH This Month

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Rated medium severity (CVSS 6.5), this vulnerability is remotely exploitable, low attack complexity. Public exploit code available and no vendor patch available.

Denial Of Service Vllm Redhat
NVD GitHub
CVSS 3.1
6.5
EPSS
0.6%
CVE-2025-32444 CRITICAL POC PATCH Act Now

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Rated critical severity (CVSS 10.0), this vulnerability is remotely exploitable, no authentication required, low attack complexity. Public exploit code available.

RCE Deserialization Vllm Redhat
NVD GitHub
CVSS 3.1
10.0
EPSS
2.5%
CVE-2025-30202 HIGH POC PATCH This Week

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Rated high severity (CVSS 7.5), this vulnerability is remotely exploitable, no authentication required, low attack complexity. Public exploit code available.

Information Disclosure Denial Of Service Vllm Redhat
NVD GitHub
CVSS 3.1
7.5
EPSS
0.4%
CVE-2024-9053 CRITICAL POC Act Now

vllm-project vllm version 0.6.0 contains a vulnerability in the AsyncEngineRPCServer() RPC server entrypoints. Rated critical severity (CVSS 9.8), this vulnerability is remotely exploitable, no authentication required, low attack complexity. Public exploit code available and no vendor patch available.

RCE Deserialization Vllm Redhat
NVD
CVSS 3.1
9.8
EPSS
2.2%
CVE-2024-11041 CRITICAL POC Act Now

vllm-project vllm version 0.6.2 contains a vulnerability in the MessageQueue.dequeue() API function. Rated critical severity (CVSS 9.8), this vulnerability is remotely exploitable, no authentication required, low attack complexity. Public exploit code available and no vendor patch available.

RCE Deserialization Vllm Redhat
NVD
CVSS 3.0
9.8
EPSS
1.3%
CVE-2025-29783 CRITICAL PATCH Act Now

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Rated critical severity (CVSS 9.0), this vulnerability is low attack complexity. This Deserialization of Untrusted Data vulnerability could allow attackers to execute arbitrary code through malicious serialized objects.

RCE Deserialization Vllm Redhat
NVD GitHub
CVSS 3.1
9.0
EPSS
1.7%
CVE-2025-29770 MEDIUM PATCH This Month

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Rated medium severity (CVSS 6.5), this vulnerability is remotely exploitable, low attack complexity. This Allocation of Resources Without Limits vulnerability could allow attackers to exhaust system resources through uncontrolled allocation.

Denial Of Service Vllm Redhat
NVD GitHub
CVSS 3.1
6.5
EPSS
0.3%
CVE-2025-25183 LOW PATCH Monitor

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Rated low severity (CVSS 2.6), this vulnerability is remotely exploitable. No vendor patch available.

Python Information Disclosure Vllm
NVD GitHub
CVSS 3.1
2.6
EPSS
0.3%
CVE-2025-24357 HIGH PATCH This Month

vLLM is a library for LLM inference and serving. Rated high severity (CVSS 7.5), this vulnerability is remotely exploitable, no authentication required. This Deserialization of Untrusted Data vulnerability could allow attackers to execute arbitrary code through malicious serialized objects.

RCE Deserialization Vllm Redhat
NVD GitHub
CVSS 3.1
7.5
EPSS
1.0%