# Running SGLang with Dynamo
This directory contains an SGLang component for Dynamo and reference implementations for deploying Large Language Models (LLMs) in various configurations using SGLang. SGLang internally uses ZMQ to communicate between the ingress and the engine processes. For Dynamo, we leverage the runtime to communicate directly with the engine processes and handle ingress and pre/post processing on our end.
## Use the Latest Release
We recommend using the latest stable release of Dynamo to avoid breaking changes. You can find the latest release here and check out the corresponding branch with:
```bash
git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
```
## Feature Support Matrix
### Core Dynamo Features
| Feature | SGLang | Notes |
|---------|--------|-------|
| Disaggregated Serving | ✅ | |
| Conditional Disaggregation | 🚧 | WIP PR |
| KV-Aware Routing | ✅ | |
| SLA-Based Planner | ✅ | |
| Load Based Planner | ❌ | Planned |
| KV Cache Events | ❌ | Planned |
### Large Scale P/D and WideEP Features
| Feature | SGLang | Notes |
|---------|--------|-------|
| WideEP | ✅ | Full support on H100s/GB200 |
| DP Rank Routing | 🚧 | Direct routing supported. The Dynamo KV router does not yet route to individual DP workers |
| GB200 Support | ✅ | |
## SGLang Quick Start
Below we provide a guide that lets you run all of our common deployment patterns on a single node.
### Start NATS and ETCD in the background
Start them using Docker Compose:

```bash
docker compose -f deploy/docker-compose.yml up -d
```
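To confirm both services are up, you can check the container status and hit their health endpoints. This is a quick sanity check that assumes the compose file maps the default ports (2379 for etcd, 8222 for NATS monitoring):

```bash
# list the services started by the compose file
docker compose -f deploy/docker-compose.yml ps

# etcd health check
curl http://localhost:2379/health

# NATS monitoring health check
curl http://localhost:8222/healthz
```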
### Install `ai-dynamo[sglang]`
#### Install latest release
We suggest using uv to install the latest release of `ai-dynamo[sglang]`. You can install uv with:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
```bash
# create a virtual env
uv venv --python 3.12 --seed

# install the latest release
uv pip install "ai-dynamo[sglang]"
```
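To verify the install, you can ask the worker module for its usage text (this is the same module the launch scripts later in this guide invoke):

```bash
# should print the worker's CLI usage if the install succeeded
python3 -m dynamo.sglang --help
```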
#### Installing editable version for development
This requires having Rust installed. We also recommend a proper installation of the CUDA toolkit, as SGLang requires `nvcc` to be available.
```bash
# create a virtual env
uv venv --python 3.12 --seed

# build dynamo runtime bindings
uv pip install maturin
cd $DYNAMO_HOME/lib/bindings/python
maturin develop --uv

cd $DYNAMO_HOME
uv pip install .
export PYTHONPATH="${PYTHONPATH}:$(pwd)/components/backends/sglang/src"

# install target sglang version (you can choose any version)
# we include the prerelease flag in order to install flashinfer rc versions
uv pip install --prerelease=allow "sglang[all]==0.4.9.post6"
```
#### Using prebuilt docker containers
```bash
docker pull nvcr.io/nvidia/ai-dynamo/sglang-runtime:0.3.2
```
#### Building docker container from source
```bash
./container/build.sh --framework sglang

# run container using prebuilt wheel
./container/run.sh --framework sglang -it

# mount workspace for development
./container/run.sh --framework sglang --mount-workspace
```
### Run Single Node Examples
> **Important**
>
> Each example corresponds to a simple bash script that runs the OpenAI-compatible server, processor, and optional router (written in Rust) and the LLM engine (written in Python) in a single terminal. You can easily take each command and run it in a separate terminal.
>
> Additionally, because we use SGLang's argument parser, you can pass in any argument that SGLang supports to the worker!
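For example, the invocation below is a hypothetical worker launch; `--model-path`, `--tp-size`, and `--mem-fraction-static` are standard SGLang server arguments, and the values are illustrative:

```bash
# flags after the module are forwarded to SGLang's argument parser
python3 -m dynamo.sglang \
  --model-path Qwen/Qwen3-0.6B \
  --tp-size 1 \
  --mem-fraction-static 0.8
```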
#### Aggregated Serving
```bash
cd $DYNAMO_HOME/components/backends/sglang
./launch/agg.sh
```
#### Aggregated Serving with KV Routing
> **Note**
>
> Until SGLang releases a version > v0.5.0rc0, you will have to install SGLang from source to use KV routing: `git clone https://github.com/sgl-project/sglang.git && cd sglang && uv pip install -e "python[all]"`. We will update this section once SGLang releases a newer version.
```bash
cd $DYNAMO_HOME/components/backends/sglang
./launch/agg_router.sh
```
#### Disaggregated Serving
**Under the hood: SGLang Load Balancer vs Dynamo Discovery**

SGLang uses a mini load balancer to route requests when serving in disaggregated mode. The load balancer functions as follows:

1. The load balancer receives a request from the client
2. A random `(prefill, decode)` pair is selected from the pool of available workers
3. The request is sent to both the `prefill` and `decode` workers via asyncio tasks
4. Internally, disaggregation is done from prefill -> decode
Because Dynamo has a discovery mechanism, we do not use a load balancer. Instead, we first route to a random prefill worker, select a random decode worker, and then send the request to both. Internally, SGLang's bootstrap server (which is part of the `tokenizer_manager`) is used in conjunction with NIXL to handle the KV transfer.
> **Important**
>
> Disaggregated serving in SGLang currently requires each worker to have the same tensor parallel size, unless you are using an MLA-based model
```bash
cd $DYNAMO_HOME/components/backends/sglang
./launch/disagg.sh
```
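As a rough sketch of what `disagg.sh` does under the hood, it launches separate prefill and decode workers. The invocation below is illustrative only (the model, GPU assignment, and exact flags are assumptions; `--disaggregation-mode` and `--disaggregation-transfer-backend` are SGLang server arguments, which the worker forwards):

```bash
# illustrative sketch - see launch/disagg.sh for the actual invocation
# decode worker
CUDA_VISIBLE_DEVICES=0 python3 -m dynamo.sglang \
  --model-path Qwen/Qwen3-0.6B \
  --disaggregation-mode decode \
  --disaggregation-transfer-backend nixl &

# prefill worker
CUDA_VISIBLE_DEVICES=1 python3 -m dynamo.sglang \
  --model-path Qwen/Qwen3-0.6B \
  --disaggregation-mode prefill \
  --disaggregation-transfer-backend nixl
```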
#### Disaggregated Serving with Mixture-of-Experts (MoE) models and DP attention
You can use this configuration to test out disaggregated serving with DP attention and expert parallelism on a single node before scaling to the full DeepSeek-R1 model across multiple nodes.
```bash
# note this will require 4 GPUs
cd $DYNAMO_HOME/components/backends/sglang
./launch/disagg_dp_attn.sh
```
When using MoE models, you can also use our implementation of the native SGLang endpoints to record expert distribution data. The `disagg_dp_attn.sh` script automatically sets up the SGLang HTTP server, sets the environment variable that controls the expert distribution recording directory, and sets the expert distribution recording mode to `stat`. You can learn more about expert parallelism load balancing here.
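As an illustration of driving the recorder by hand, SGLang's HTTP server exposes `start_expert_distribution_record`, `dump_expert_distribution_record`, and `stop_expert_distribution_record` endpoints. A sketch, assuming the server is listening on SGLang's default port 30000 (check the script for the actual address):

```bash
# begin recording expert distribution statistics
curl -X POST http://localhost:30000/start_expert_distribution_record

# ... send some inference traffic, then dump and stop ...
curl -X POST http://localhost:30000/dump_expert_distribution_record
curl -X POST http://localhost:30000/stop_expert_distribution_record
```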
### Testing the Deployment
Send a test request to verify your deployment:
```bash
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-0.6B",
    "messages": [
      {
        "role": "user",
        "content": "Explain why Roger Federer is considered one of the greatest tennis players of all time"
      }
    ],
    "stream": true,
    "max_tokens": 30
  }'
```
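You can also list the models the frontend has registered, assuming it exposes the standard OpenAI-compatible `/v1/models` endpoint:

```bash
curl localhost:8000/v1/models
```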
### Request Migration
You can enable request migration to handle worker failures gracefully. Use the `--migration-limit` flag to specify how many times a request can be migrated to another worker:
```bash
python3 -m dynamo.sglang ... --migration-limit=3
```
This allows a request to be migrated up to 3 times before failing. See the Request Migration Architecture documentation for details on how this works.
## Advanced Examples
Below we provide a selected list of advanced examples. Please open up an issue if you’d like to see a specific example!
- Run a multi-node sized model
- Large scale P/D disaggregation with WideEP
- Hierarchical Cache (HiCache)
## Deployment
We currently provide deployment examples for Kubernetes and SLURM.