# Getting Started

## Development Environment
This section describes how to set up your development environment.
### Recommended Setup: Using Dev Container
We recommend using our pre-configured development container:
1. Install the prerequisites: Docker and Visual Studio Code with the Dev Containers extension.

2. Get the code:

   ```bash
   git clone https://github.com/ai-dynamo/dynamo.git
   cd dynamo
   ```

3. Open the repository in Visual Studio Code:

   - Launch Visual Studio Code.
   - Click the remote indicator button in the bottom-left corner.
   - Select Reopen in Container.
Visual Studio Code builds and starts a container with all necessary dependencies for Dynamo development.
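If you prefer working from a terminal, the same container can be built and started with the standalone Dev Containers CLI. This is an optional alternative, not part of the Dynamo tooling, and it assumes you have Node.js/npm available to install the CLI:

```bash
# Optional alternative (assumes Node.js/npm): install the Dev Containers CLI
npm install -g @devcontainers/cli

# From the repo root, build and start the container defined in .devcontainer/
devcontainer up --workspace-folder .
```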
### Alternative Setup: Manual Installation
If you don't want to use the dev container, you can set up the environment manually:

1. Ensure you have:

   - Ubuntu 24.04 (recommended)
   - x86_64 CPU
   - Python 3.x
   - Git

   See the Support Matrix for more information.
2. Install the required system packages:

   ```bash
   apt-get update
   DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev python3-pip python3-venv libucx0
   ```

3. Set up a Python virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   ```

4. Install Dynamo:

   ```bash
   pip install "ai-dynamo[all]"
   ```
> **Note:** To ensure compatibility, use the examples in the release branch or tag that matches the version you installed.
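As a quick sanity check after these steps, you can confirm the platform and the install from the same shell. These are illustrative, standard commands; the expected values come from the prerequisites above:

```bash
# Confirm platform prerequisites (expect Ubuntu 24.04 on x86_64 with Python 3.x)
lsb_release -d
uname -m
python3 --version

# Confirm the package installed into the active virtual environment
pip show ai-dynamo
```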
## Building the Dynamo Base Image
Although it is not needed for local development, deploying Dynamo pipelines to Kubernetes requires building and pushing a Dynamo base image to a container registry. You can use any container registry of your choice, such as:

- Docker Hub (docker.io)
- NVIDIA NGC Container Registry (nvcr.io)
- Any private registry
To build it:

```bash
./container/build.sh
docker tag dynamo:latest-vllm <your-registry>/dynamo-base:latest-vllm
docker login <your-registry>
docker push <your-registry>/dynamo-base:latest-vllm
```
After building, use this image by setting the `DYNAMO_IMAGE` environment variable to point to your built image:

```bash
export DYNAMO_IMAGE=<your-registry>/dynamo-base:latest-vllm
```
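If you want the setting to survive across shells, you can append the export to your shell profile. This is an optional convenience, shown here for bash:

```bash
# Optional: persist DYNAMO_IMAGE for future bash sessions (substitute your registry)
echo 'export DYNAMO_IMAGE=<your-registry>/dynamo-base:latest-vllm' >> ~/.bashrc
```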
## Running and Interacting with an LLM Locally
To run a model and interact with it locally, call `dynamo run` with a Hugging Face model. `dynamo run` supports several backends, including: `mistralrs`, `sglang`, `vllm`, and `tensorrtllm`.
### Example Command
```bash
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```

```text
? User › Hello, how are you?
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting.
They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking."
Hmm, I need to come up with a suitable reply. ...
```
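Because `dynamo run` supports several backends, the same model can be served by swapping the `out=` value. A sketch, assuming the corresponding backend is installed and supports this model:

```bash
# Illustrative: same model served by the sglang backend instead of vllm
dynamo run out=sglang deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```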
## LLM Serving
Dynamo provides a simple way to spin up a local set of inference components, including:

- OpenAI-Compatible Frontend: a high-performance, OpenAI-compatible HTTP API server written in Rust.
- Basic and KV-Aware Router: routes and load-balances traffic to a set of workers.
- Workers: a set of pre-configured LLM serving engines.
To run a minimal configuration, use a pre-configured example.
### Start Dynamo Distributed Runtime Services
To start the Dynamo Distributed Runtime services for the first time:

```bash
docker compose -f deploy/docker-compose.yml up -d
```
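To confirm the services came up, you can list the compose services. This is a standard Docker Compose command; the service names depend on what deploy/docker-compose.yml defines:

```bash
# Check that the runtime services are up and healthy
docker compose -f deploy/docker-compose.yml ps
```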
### Start Dynamo LLM Serving Components
Next, serve a minimal configuration with an HTTP server, basic round-robin router, and a single worker:

```bash
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```
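Before sending requests, you can check that the frontend is listening. Since the frontend is OpenAI-compatible, it should expose the standard model-listing endpoint; this is an assumption about the API surface, not something specific to this example:

```bash
# Hedged readiness check: assumes the OpenAI-compatible /v1/models endpoint is served
curl -s localhost:8000/v1/models | jq
```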
### Send a Request
```bash
curl localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "stream": false,
  "max_tokens": 300
}' | jq
```
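To print only the assistant's reply rather than the full response object, you can extend the jq filter. This assumes the standard OpenAI chat-completions response shape (`choices[0].message.content`):

```bash
# Same request, but print only the generated text (OpenAI response shape assumed)
curl -s localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
  "messages": [{"role": "user", "content": "Hello, how are you?"}],
  "stream": false,
  "max_tokens": 300
}' | jq -r '.choices[0].message.content'
```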
## Local Development
If you use VS Code or Cursor, use the .devcontainer folder, which is built on Microsoft's dev container extension. For instructions, see the README.

Otherwise, to develop locally, we recommend working inside the container:
```bash
./container/build.sh
./container/run.sh -it --mount-workspace

cargo build --release

# Make the freshly built binaries available to the Python SDK CLI
mkdir -p /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/http /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/llmctl /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/dynamo-run /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin

uv pip install -e .
export PYTHONPATH=$PYTHONPATH:/workspace/deploy/dynamo/sdk/src:/workspace/components/planner/src
```
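As a quick sanity check that the copy steps worked, you can list the CLI bin directory; the three binaries built above should be present:

```bash
# Expect http, llmctl, and dynamo-run here after the copies above
ls /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
```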
### Conda Environment
Alternatively, use a Conda environment:

```bash
conda activate <ENV_NAME>

pip install nixl  # Or install https://github.com/ai-dynamo/nixl from source

cargo build --release

# To install ai-dynamo-runtime from source
cd lib/bindings/python
pip install .
cd ../../../
pip install .[all]

# To test
docker compose -f deploy/docker-compose.yml up -d
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```