LLM Deployment Examples#
This directory contains examples and reference implementations for deploying Large Language Models (LLMs) in various configurations.
Components#
workers: Prefill and decode workers that handle the actual LLM inference
router: Handles API requests and routes them to the appropriate workers based on the specified strategy
frontend: OpenAI-compatible HTTP server that handles incoming requests
Deployment Architectures#
Aggregated#
Single-instance deployment where both prefill and decode are done by the same worker.
Disaggregated#
Distributed deployment where prefill and decode are done by separate workers that can scale independently.
sequenceDiagram
    participant D as VllmWorker
    participant Q as PrefillQueue
    participant P as PrefillWorker
    Note over D: Request is routed to decode
    D->>D: Decide if prefill should be done locally or remotely
    D->>D: Allocate KV blocks
    D->>Q: Put RemotePrefillRequest on the queue
    P->>Q: Pull request from the queue
    P-->>D: Read cached KVs from Decode
    D->>D: Decode other requests
    P->>P: Run prefill
    P-->>D: Write prefilled KVs into allocated blocks
    P->>D: Send completion notification
    Note over D: Notification received when prefill is done
    D->>D: Schedule decoding
Getting Started#
1. Choose a deployment architecture based on your requirements
2. Configure the components as needed
3. Deploy using the provided scripts
Prerequisites#
Start required services (etcd and NATS) using Docker Compose
docker compose -f deploy/metrics/docker-compose.yml up -d
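Before continuing, it can help to confirm that both services are reachable. A minimal check, assuming the compose file maps the default ports (2379 for the etcd client API, 8222 for the NATS monitoring endpoint):
# Confirm the containers are running
docker compose -f deploy/metrics/docker-compose.yml ps
# etcd health check (default client port 2379)
curl -s http://localhost:2379/health
# NATS monitoring health check (default monitoring port 8222)
curl -s http://localhost:8222/healthz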
Build docker#
# On an x86 machine
./container/build.sh --framework vllm
# On an ARM machine (ex: GB200)
./container/build.sh --framework vllm --platform linux/arm64
Note
Building a vLLM docker image for ARM machines currently involves building vLLM from source, which is known to have performance issues and to require extensive system RAM; see vLLM Issue 8878.
You can tune the number of parallel build jobs for building vLLM from source on ARM based on your available cores and system RAM with VLLM_MAX_JOBS.
For example, on an ARM machine with low system resources:
./container/build.sh --framework vllm --platform linux/arm64 --build-arg VLLM_MAX_JOBS=2
For example, on a GB200, which has a very high CPU core count and ample memory:
./container/build.sh --framework vllm --platform linux/arm64 --build-arg VLLM_MAX_JOBS=64
Once vLLM publishes pre-built ARM wheels, this process will be simplified.
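If you are unsure what value to pass, one rough starting point is to derive VLLM_MAX_JOBS from the host's resources. A minimal sketch; the 4 GiB-of-RAM-per-job divisor is a heuristic assumption, not a vLLM-documented figure:
# Heuristic: one parallel job per ~4 GiB of RAM, capped at the core count
CORES=$(nproc)
MEM_GIB=$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)
JOBS=$(( MEM_GIB / 4 < CORES ? MEM_GIB / 4 : CORES ))
./container/build.sh --framework vllm --platform linux/arm64 --build-arg VLLM_MAX_JOBS=$(( JOBS > 0 ? JOBS : 1 ))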
Run container#
./container/run.sh -it --framework vllm
Run Deployment#
This figure shows an overview of the major components to deploy:
+----------------+
+------| prefill worker |-------+
notify | | | |
finished | +----------------+ | pull
v v
+------+ +-----------+ +------------------+ push +---------------+
| HTTP |----->| processor |----->| decode/monolith |------------>| prefill queue |
| |<-----| |<-----| worker | | |
+------+ +-----------+ +------------------+ +---------------+
| ^ |
query best | | return | publish kv events
worker | | worker_id v
| | +------------------+
| +---------| kv-router |
+------------->| |
+------------------+
Note
The planner component is enabled by default for all deployment architectures but is set to no-op mode. This means the planner observes metrics but doesn't take scaling actions. To enable active scaling, you can add --Planner.no-operation=false to your dynamo serve command. For more details, see Planner.
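For example, to serve the aggregated graph (introduced below) with active scaling enabled:
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.agg:Frontend -f ./configs/agg.yaml --Planner.no-operation=false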
Example architectures#
Note
For a non-dockerized deployment, first export DYNAMO_HOME to point to the dynamo repository root, e.g. export DYNAMO_HOME=$(pwd)
Aggregated serving#
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.agg:Frontend -f ./configs/agg.yaml
Aggregated serving with KV Routing#
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.agg_router:Frontend -f ./configs/agg_router.yaml
Disaggregated serving#
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.disagg:Frontend -f ./configs/disagg.yaml
Disaggregated serving with KV Routing#
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.disagg_router:Frontend -f ./configs/disagg_router.yaml
Client#
In another terminal:
# this test request has an input sequence length (ISL) of around 200 tokens
curl localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"messages": [
{
"role": "user",
"content": "In the heart of Eldoria, an ancient land of boundless magic and mysterious creatures, lies the long-forgotten city of Aeloria. Once a beacon of knowledge and power, Aeloria was buried beneath the shifting sands of time, lost to the world for centuries. You are an intrepid explorer, known for your unparalleled curiosity and courage, who has stumbled upon an ancient map hinting at ests that Aeloria holds a secret so profound that it has the potential to reshape the very fabric of reality. Your journey will take you through treacherous deserts, enchanted forests, and across perilous mountain ranges. Your Task: Character Background: Develop a detailed background for your character. Describe their motivations for seeking out Aeloria, their skills and weaknesses, and any personal connections to the ancient city or its legends. Are they driven by a quest for knowledge, a search for lost familt clue is hidden."
}
],
"stream":false,
"max_tokens": 30
}'
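To pull just the generated text out of the JSON response, you can pipe it through jq (assuming jq is installed; the response follows the OpenAI chat completions schema). A shorter prompt is used here for brevity:
curl -s localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
  "messages": [{"role": "user", "content": "Briefly introduce yourself."}],
  "stream": false,
  "max_tokens": 30
}' | jq -r '.choices[0].message.content'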
Multinode deployment#
See Multinode Examples for more details.
Close deployment#
See Close deployment in the Dynamo Run topic to learn how to close the deployment.
Deploy to Kubernetes#
These examples can be deployed to a Kubernetes cluster using Dynamo Cloud and the Dynamo CLI.
Prerequisites#
You must have first followed the instructions in dynamo/deploy/cloud/helm/README.md to install Dynamo Cloud on your Kubernetes cluster.
Note
The KUBE_NS variable in the following steps must match the Kubernetes namespace where you installed Dynamo Cloud. You must also expose the dynamo-store service externally. This will be the endpoint the CLI uses to interface with Dynamo Cloud.
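One way to expose it locally is a port-forward, matching the DYNAMO_CLOUD=http://localhost:8080 setting used below. The service port here is an assumption based on a default Helm install; adjust it to match your cluster:
# Hypothetical default service port; verify with: kubectl get svc -n $KUBE_NS
kubectl port-forward svc/dynamo-store 8080:80 -n $KUBE_NS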
Deployment Steps#
For detailed deployment instructions, please refer to the Operator Deployment Guide. The following are the specific commands for the LLM examples:
# Set your project root directory
export PROJECT_ROOT=$(pwd)
# Configure environment variables (see operator_deployment.md for details)
export KUBE_NS=dynamo-cloud
export DYNAMO_CLOUD=http://localhost:8080 # If using port-forward
# OR
# export DYNAMO_CLOUD=https://dynamo-cloud.nvidia.com # If using Ingress/VirtualService
# Build the Dynamo base image (see operator_deployment.md for details)
export DYNAMO_IMAGE=<your-registry>/<your-image-name>:<your-tag>
# Build the service
cd $PROJECT_ROOT/examples/llm
DYNAMO_TAG=$(dynamo build graphs.agg:Frontend | grep "Successfully built" | awk '{ print $NF }' | sed 's/\.$//')
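# Sanity check (optional): the grep/awk pipeline above fails silently if the
# build output format changes, so confirm a tag was captured before deploying
echo "DYNAMO_TAG=${DYNAMO_TAG:?dynamo build did not produce a tag}"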
# Deploy to Kubernetes
export DEPLOYMENT_NAME=llm-agg
dynamo deployment create $DYNAMO_TAG -n $DEPLOYMENT_NAME -f ./configs/agg.yaml
Note: Optionally add --Planner.no-operation=false at the end of the deployment command to enable the planner component to take scaling actions on your deployment.
Testing the Deployment#
Once the deployment is complete, you can test it using:
# Find your frontend pod
export FRONTEND_POD=$(kubectl get pods -n ${KUBE_NS} | grep "${DEPLOYMENT_NAME}-frontend" | sort -k1 | tail -n1 | awk '{print $1}')
# Forward the pod's port to localhost
kubectl port-forward pod/$FRONTEND_POD 8000:8000 -n ${KUBE_NS}
# Test the API endpoint
curl localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"messages": [
{
"role": "user",
"content": "In the heart of Eldoria, an ancient land of boundless magic and mysterious creatures, lies the long-forgotten city of Aeloria. Once a beacon of knowledge and power, Aeloria was buried beneath the shifting sands of time, lost to the world for centuries. You are an intrepid explorer, known for your unparalleled curiosity and courage, who has stumbled upon an ancient map hinting at ests that Aeloria holds a secret so profound that it has the potential to reshape the very fabric of reality. Your journey will take you through treacherous deserts, enchanted forests, and across perilous mountain ranges. Your Task: Character Background: Develop a detailed background for your character. Describe their motivations for seeking out Aeloria, their skills and weaknesses, and any personal connections to the ancient city or its legends. Are they driven by a quest for knowledge, a search for lost familt clue is hidden."
}
],
"stream":false,
"max_tokens": 30
}'
For more details on managing deployments, testing, and troubleshooting, please refer to the Operator Deployment Guide.