Dynamo Examples#

This directory contains practical examples demonstrating how to deploy and use Dynamo for distributed LLM inference. Each example includes setup instructions, configuration files, and explanations to help you understand different deployment patterns and use cases.

Want to see a specific example? Open a GitHub issue to request an example you’d like to see, or open a pull request if you’d like to contribute your own!

Basics & Tutorials#

Learn fundamental Dynamo concepts through these introductory examples:

  • Quickstart - Simple aggregated serving example with vLLM backend

  • Disaggregated Serving - Prefill/decode separation for enhanced performance and scalability

  • Multi-node - Distributed inference across multiple nodes and GPUs
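
Once any of these examples is running, you can exercise it through the frontend's OpenAI-compatible HTTP API. Below is a minimal client sketch; the port (localhost:8000, the common default in these examples) and the model name are assumptions, so substitute whatever you actually deployed:

    # Query a running Dynamo frontend through its OpenAI-compatible API.
    # The port and model name are assumptions -- adjust to your deployment.
    import requests

    response = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "Qwen/Qwen2.5-0.5B-Instruct",  # placeholder model name
            "messages": [{"role": "user", "content": "Hello!"}],
            "max_tokens": 64,
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])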

Deployment Examples#

Platform-specific deployment guides for production environments:

  • Amazon EKS - Deploy Dynamo on Amazon Elastic Kubernetes Service

  • Azure AKS - Deploy Dynamo on Azure Kubernetes Service

  • Router Standalone - Standalone router deployment patterns

  • Amazon ECS - Coming soon

  • Google GKE - Coming soon

  • Ray - Coming soon

  • NVIDIA Cloud Functions (NVCF) - Coming soon

Runtime Examples#

Low-level runtime examples for developers working directly with the Python bindings to Dynamo's Rust runtime:

  • Hello World - Minimal Dynamo runtime service demonstrating basic concepts
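
To give a feel for what the Hello World example covers, here is a rough sketch of a runtime worker that registers a component and serves a streaming endpoint. The dynamo.runtime API shown (DistributedRuntime, dynamo_worker, serve_endpoint) is an assumption modeled on that example, so treat the example's own source as authoritative. Like the example, it expects etcd and NATS to be running:

    # Minimal runtime worker sketch; the dynamo.runtime API below is assumed
    # from the hello_world example and may differ in your version.
    import asyncio

    from dynamo.runtime import DistributedRuntime, dynamo_worker

    async def generate(request):
        # Stream a response back to the caller, one chunk at a time.
        for word in ("Hello", "world"):
            yield word

    @dynamo_worker()
    async def worker(runtime: DistributedRuntime):
        # Register a component in a namespace and expose a "generate" endpoint.
        component = runtime.namespace("hello_world").component("backend")
        await component.create_service()
        endpoint = component.endpoint("generate")
        await endpoint.serve_endpoint(generate)

    if __name__ == "__main__":
        asyncio.run(worker())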

Getting Started#

  1. Choose your deployment pattern: Start with the Quickstart for a simple local deployment, or explore Disaggregated Serving for advanced architectures.

  2. Set up prerequisites: Most examples require etcd and NATS services. You can start them using the command below; a quick health-check sketch follows this list:

    docker compose -f deploy/metrics/docker-compose.yml up -d
    
  3. Follow the example: Each directory contains detailed setup instructions and configuration files specific to that deployment pattern.
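
If an example fails to start, a common cause is that etcd or NATS is not reachable. Here is a small sanity-check sketch, assuming the compose file maps etcd's client port (2379) and the NATS monitoring port (8222) to localhost:

    # Verify etcd and NATS are reachable before launching an example.
    # The ports below are the services' defaults and are assumptions about
    # what the compose file maps -- adjust if your setup differs.
    import requests

    for name, url in [
        ("etcd", "http://localhost:2379/health"),
        ("NATS", "http://localhost:8222/healthz"),
    ]:
        try:
            requests.get(url, timeout=2).raise_for_status()
            print(f"{name}: ok")
        except requests.RequestException as exc:
            print(f"{name}: not reachable ({exc})")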

Prerequisites#

Before running any examples, ensure you have:

  • Docker & Docker Compose - For containerized services

  • CUDA-compatible GPU - For LLM inference (not required for hello_world, which runs without a GPU)

  • Python 3.9+ - For client scripts and utilities

  • Kubernetes cluster - For any cloud deployment/K8s examples

Framework Support#

These examples give a broad view of how Dynamo works with the major inference engines.

If you want to see advanced, framework-specific deployment patterns and best practices, check out the Components Workflows directory:

  • vLLM – vLLM-specific deployment and configuration

  • SGLang – SGLang integration examples and workflows

  • TensorRT-LLM – TensorRT-LLM workflows and optimizations