vLLM


LLM Deployment using vLLM

Dynamo vLLM integrates vLLM engines into Dynamo’s distributed runtime, enabling disaggregated serving, KV-aware routing, and request cancellation while maintaining full compatibility with vLLM’s native engine arguments. Dynamo leverages vLLM’s native KV cache events, NIXL-based transfer mechanisms, and metric reporting to enable KV-aware routing and P/D disaggregation.

Installation

Install Latest Release

We recommend using uv to install:

$ uv venv --python 3.12 --seed
$ uv pip install "ai-dynamo[vllm]"

This installs Dynamo with a compatible vLLM version.


Container

We have public images available on NGC Catalog. Pull and run a prebuilt image:

$ docker pull nvcr.io/nvidia/ai-dynamo/vllm-runtime:<version>
$ ./container/run.sh -it --framework VLLM --image nvcr.io/nvidia/ai-dynamo/vllm-runtime:<version>

Alternatively, render the Dockerfile, build the container locally, and run it:

$ python container/render.py --framework vllm --output-short-filename
$ docker build -f container/rendered.Dockerfile -t dynamo:latest-vllm .
$ ./container/run.sh -it --framework VLLM [--mount-workspace]

Development Setup

For development, use the devcontainer which has all dependencies pre-installed.

Feature Support Matrix

| Feature | Status | Notes |
|---|---|---|
| Disaggregated Serving | | Prefill/decode separation with NIXL KV transfer |
| KV-Aware Routing | | |
| SLA-Based Planner | | |
| KVBM | | |
| LMCache | | |
| FlexKV | | |
| Multimodal Support | | Via vLLM-Omni integration |
| Observability | | Metrics and monitoring |
| WideEP | | Support for DeepEP |
| DP Rank Routing | | Hybrid load balancing via external DP rank control |
| LoRA | | Dynamic loading/unloading from S3-compatible storage |
| GB200 Support | | Container functional on main |

Quick Start

Start infrastructure services for local development:

$ docker compose -f deploy/docker-compose.yml up -d
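Before launching workers, you can sanity-check that the infrastructure services are reachable. The sketch below probes NATS and etcd on their default ports (4222 and 2379); the host and ports are assumptions based on those services' standard defaults, not values prescribed by this guide.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed defaults for the local dev stack: NATS on 4222, etcd on 2379.
for name, port in [("nats", 4222), ("etcd", 2379)]:
    status = "up" if port_open("localhost", port) else "down"
    print(f"{name}: {status}")
```

If either service reports "down", check `docker compose ps` for failed containers before starting workers.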

Launch an aggregated serving deployment:

$ cd $DYNAMO_HOME/examples/backends/vllm
$ bash launch/agg.sh
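Once the deployment is up, the frontend exposes an OpenAI-compatible HTTP API. The sketch below builds a chat-completion request body you could POST to it; the URL, port, and model name are illustrative assumptions, not values prescribed by this guide.

```python
import json

# Assumed frontend endpoint; adjust host/port to your deployment.
FRONTEND_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, stream: bool = False) -> str:
    """Build an OpenAI-compatible chat-completion payload as a JSON string."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }
    return json.dumps(payload)

# "Qwen/Qwen3-0.6B" is a placeholder; use whatever model agg.sh serves.
body = build_chat_request("Qwen/Qwen3-0.6B", "Hello!")
print(body)
```

You can send this body with curl or any HTTP client once the frontend is serving, e.g. `curl -X POST $FRONTEND_URL -H "Content-Type: application/json" -d "$body"`.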

Next Steps