CVE-2026-25960 PUBLISHED CVSS 7.1 HIGH

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs, but load_from_url_async makes the actual HTTP request with aiohttp, which internally parses URLs with the yarl library. A URL that the two parsers interpret differently can therefore pass hostname validation yet cause aiohttp to connect to a different, blocked host. This vulnerability is fixed in 0.17.0.
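The root cause is a parser differential: the hostname that urllib3.util.parse_url() validates is not necessarily the hostname that aiohttp/yarl connects to. A common mitigation for this class of bug is to validate and canonicalize with a single parser, then hand the rebuilt URL (not the raw user string) to the HTTP client. Below is a minimal stdlib-only sketch of that pattern; the function name and blocklist policy are illustrative assumptions, not vLLM's actual patch:

```python
import ipaddress
import socket
from urllib.parse import urlsplit, urlunsplit


def validate_and_canonicalize(url: str) -> str:
    """Validate a user-supplied URL and return the exact string to hand
    to the HTTP client. Illustrative sketch only -- not vLLM's fix."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        raise ValueError(f"unsupported scheme: {parts.scheme!r}")
    if parts.hostname is None:
        raise ValueError("URL has no hostname")
    # Resolve the hostname and reject any address pointing at internal
    # infrastructure, so DNS tricks cannot smuggle in a loopback or
    # private target.
    for _family, _, _, _, sockaddr in socket.getaddrinfo(parts.hostname, None):
        addr = ipaddress.ip_address(sockaddr[0])
        if addr.is_loopback or addr.is_private or addr.is_link_local or addr.is_reserved:
            raise ValueError(f"blocked address: {addr}")
    # Rebuild the URL from the parsed components so the client fetches
    # exactly what was validated, not the raw attacker-controlled string.
    return urlunsplit(parts)
```

Re-serializing from the validated components narrows (but does not fully eliminate) the window for validator/client disagreement; the robust fix, as in vLLM 0.17.0, is to validate with the same parser the HTTP client actually uses.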


Risk Scores

CVSS v3.1
7.1
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L
EPSS Score
0.02%
5.5th percentile

Affected Products

Vendor         Product   Versions
vllm-project   vllm      >= 0.15.1, < 0.17.0
PyPI           vllm      0.15.1
vllm           vllm      0.15.1
