CVE-2026-27893 PUBLISHED CVSS 8.8 HIGH

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
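The flaw is a classic case of a hardcoded security override. The sketch below illustrates the pattern in Python; the function names and config plumbing are illustrative stand-ins, not vLLM's actual internals.

```python
# Sketch of the vulnerability class described above: a model implementation
# hardcodes trust_remote_code=True instead of propagating the user's setting.
# All names here are hypothetical stand-ins, not vLLM's actual API.

def load_subcomponent_vulnerable(repo_id: str, user_trust_remote_code: bool) -> dict:
    # BUG: the user's explicit opt-out is ignored; remote code is always trusted.
    return {"repo": repo_id, "trust_remote_code": True}

def load_subcomponent_patched(repo_id: str, user_trust_remote_code: bool) -> dict:
    # FIX: honor the user-supplied trust-remote-code setting.
    return {"repo": repo_id, "trust_remote_code": user_trust_remote_code}

if __name__ == "__main__":
    # Even with the opt-out (False), the vulnerable path still trusts remote code.
    print(load_subcomponent_vulnerable("user/model", False)["trust_remote_code"])
    print(load_subcomponent_patched("user/model", False)["trust_remote_code"])
```

The patched behavior in 0.18.0 corresponds to the second function: the sub-component loader inherits the caller's setting rather than overriding it.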

Risk Scores

CVSS v3.1
8.8
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
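The vector string above encodes the individual CVSS v3.1 metrics (network attack vector, low complexity, no privileges, user interaction required, high impact on confidentiality, integrity, and availability). A minimal way to split it into metric/value pairs:

```python
# Parse this advisory's CVSS v3.1 vector string into its metric/value pairs.
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"

# Skip the leading "CVSS:3.1" version segment, then split each metric on ":".
metrics = dict(part.split(":") for part in vector.split("/")[1:])
print(metrics)  # {'AV': 'N', 'AC': 'L', 'PR': 'N', 'UI': 'R', 'S': 'U', 'C': 'H', 'I': 'H', 'A': 'H'}
```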

Affected Products

Vendor        Product  Versions
vllm-project  vllm     >= 0.10.1, < 0.18.0
PyPI          vllm     0.10.1
vllm          vllm     0.10.1
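To check whether an installed version falls inside the affected range, a simple tuple comparison over the dotted version suffices. This sketch assumes plain `X.Y.Z` version strings (no pre-release suffixes):

```python
# Check a vllm version against the affected range: >= 0.10.1, < 0.18.0.
# Assumes simple dotted numeric versions; pre-release tags are not handled.

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    return parse("0.10.1") <= parse(version) < parse("0.18.0")

print(is_affected("0.10.1"))  # True  (first affected release)
print(is_affected("0.17.9"))  # True  (still inside the range)
print(is_affected("0.18.0"))  # False (patched release)
```

For real dependency checks, a proper version parser (e.g. PEP 440 semantics) is preferable to this sketch.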
