# Deploying Dynamo Inference Graphs to Kubernetes
This guide provides an overview of the different deployment options available for Dynamo inference graphs in Kubernetes environments.
## Deployment Options
Dynamo provides two distinct deployment paths, each serving different use cases:
### 1. 🚀 Dynamo Cloud Kubernetes Platform [PREFERRED]
The Dynamo Cloud Platform (`deploy/dynamo/helm/`) provides a managed deployment experience:

- Contains the infrastructure components required for the Dynamo cloud platform
- Used when deploying with the `dynamo deploy` CLI commands
- Provides a managed deployment experience
For detailed instructions on using the Dynamo Cloud Platform, see:

- Dynamo Cloud Platform Guide: walks through installing and configuring the Dynamo cloud components on your Kubernetes cluster.
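As a rough sketch of the managed path, the commands below assume the Dynamo cloud platform is already installed in your cluster and an inference graph has been built; the endpoint URL and graph target shown here are placeholders, not values defined by this guide.

```shell
# Point the CLI at your Dynamo cloud platform endpoint
# (URL is a placeholder for your own installation).
export DYNAMO_CLOUD=https://dynamo-cloud.example.com

# Deploy a built inference graph through the managed platform;
# "hello-world:Frontend" is an illustrative graph target --
# substitute the graph you built with the Dynamo SDK.
dynamo deploy hello-world:Frontend
```

The managed path handles the underlying Kubernetes resources for you; see the Dynamo Cloud Platform Guide for the exact commands and options supported by your version of the CLI.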
### 2. Manual Deployment with Helm Charts
The manual deployment path (`deploy/Kubernetes/`) is available for users who need more control over their deployments:

- Used for manually deploying inference graphs to Kubernetes
- Contains Helm charts and configurations for deploying individual inference pipelines
- Provides full control over deployment parameters
- Requires manual management of infrastructure components
Documentation:

- Manual Helm Deployment Guide: detailed instructions on manual deployment
- [Deploying Dynamo Inference Graphs to Kubernetes using Helm](manual_helm_deployment.md#deploying-dynamo-inference-graphs-to-kubernetes-using-helm): all-in-one script
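For orientation, a manual installation might look like the following. The namespace, release name, chart path, and values file are illustrative assumptions, not the exact names used by the charts in `deploy/Kubernetes/` — consult the Manual Helm Deployment Guide for the real ones.

```shell
# Create a namespace for the pipeline (name is an example).
kubectl create namespace dynamo-demo

# Install a pipeline chart from the repository checkout.
# The chart path and values file are placeholders; with the
# manual path you own these parameters and their lifecycle.
helm install my-pipeline ./deploy/Kubernetes/pipeline \
  --namespace dynamo-demo \
  --values my-values.yaml

# Verify the release and its pods came up.
helm status my-pipeline --namespace dynamo-demo
kubectl get pods --namespace dynamo-demo
```

Because nothing is managed for you on this path, upgrades and infrastructure dependencies are also handled manually, e.g. via `helm upgrade` and your own values files.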
## Getting Started
For Dynamo Cloud Platform:

1. Follow the Dynamo Cloud Platform Guide
2. Deploy a Hello World pipeline using the Operator Deployment Guide
3. Deploy a Dynamo LLM pipeline to Kubernetes using the Deploy LLM Guide
For Manual Deployment:

1. Follow the Manual Helm Deployment Guide
## Example Deployment
See the [Hello World example](../../examples/hello_world.md#deploying-to-and-running-the-example-in-kubernetes) for a complete walkthrough of deploying a simple inference graph.
See the LLM example for a complete walkthrough of deploying a production-ready LLM inference pipeline to Kubernetes.